Note: For the time being, this is only an automatic translation.
BIG DATA - A Polemic on a Technology Trend
The Term Big Data
Wikipedia defines the term as follows: “→ BIG DATA describes the use of large volumes of data from a wide variety of sources, processed at high speed, to generate economic benefit.”
I would like to sketch Big Data as follows.
BIG DATA is an IT technology for processing gigantic data volumes of various kinds ...
... consisting of a set of measures (tools, technologies, IT architectures, software, hardware), ...
... that relies not on spot checks but on as much data as possible ...
... in order to make processes, things and actions (in technology, science, society, the economy, etc.) more efficient, transparent and predictable.
To clarify BIG DATA, I mention the following four applications.
Topic election research - the aim is no longer to determine an electoral trend by sampling on the eve of an election, but to determine the election result itself in advance (!).
Topic nanotechnology - the quality of a semiconductor is no longer to be determined by means of radiation physics and software. Instead, a given quantity is recognized in advance as a pattern and processed in a targeted way, without friction losses, according to a resulting plan of action.
Topic medicine - primacy no longer lies with screening studies triggered by possible preconditions (e.g. hereditary disease, stress factors, environment). Instead, patterns are searched for and secured in a targeted way even before procreation, so that a person yet to be born fulfils 100% of the genetic factors of an ideal person in a concretely expected environment (biological, psychological, physiological, etc.).
Topic production processes, including the energy sector - there is no longer a reaction to a forecast demand in the value-added process; instead, causal and quantitative patterns are identified and created across immense and highly diverse preconditions, ensuring absolute market control (digital real-time value creation).
BIG DATA in one's own LAN and WAN - used sensibly outside external clouds, and taking into account the requirements of information security and quality management - can trigger a quantum leap in business decisions, considerable leverage and economic benefit for organizations (companies, institutions, networks).
BIG DATA in the CLOUD - here (in my opinion) lies a deeper background that calls its benefit into question. It is about the non-propagated goal ...
... of aggregating gigantic data sets from a wide variety of sources in clouds, ...
... of channelling this data there by means of pattern recognition, ...
... while these clouds are concentrated in the hands of ever fewer ...
... and possibly non-legitimate users (e.g. the NSA).
Polemic on the technology trend Big Data
Quality management and its strategic goal of freedom from faults are increasingly shaped by a trend that runs through all business processes. It is about opening up new avenues to competitive advantage, benefit and customer satisfaction. IT and value creation in organizations are becoming ever more inseparably linked. Large data sets demand new technologies - BIG DATA is in focus. BIG DATA - as a production factor. BIG DATA - the territory of the future?
BIG DATA’s mission is to ensure speed, efficiency, analytical potential, benefit and quality when processing unstructured, gigantic data volumes - in real time if possible. A wide variety of data sources are to be used. BIG DATA is capable, among other things, of finding hidden patterns without the human factor. BIG DATA is the energy of the future for technological self-sufficiency. Here not only legal but above all ethical problems arise.
How? Implementation requires newly coupled hardware and software solutions. Current and emerging technologies - such as quantum computing, new algorithms and statistical methods, virtual reality, NoSQL (a non-relational approach for large databases) and Hadoop - will make this increasingly possible.
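To make the Hadoop-style processing model tangible, here is a toy sketch (not Hadoop itself, and with invented sample data) of the map/reduce idea such frameworks apply to gigantic, distributed data sets: each node counts patterns in its own chunk, and the partial results are merged.

```python
from collections import Counter
from functools import reduce

def map_phase(document: str) -> Counter:
    """Map step: each worker counts words in its own chunk of data."""
    return Counter(document.lower().split())

def reduce_phase(partials):
    """Reduce step: merge the partial counts from all workers."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Three "chunks" standing in for data distributed across many nodes.
chunks = [
    "big data needs new tools",
    "big data is a production factor",
    "tools for data at scale",
]

totals = reduce_phase(map_phase(c) for c in chunks)
print(totals["data"])   # → 3 (the word "data" appears once in every chunk)
```

Because the map step touches each chunk independently, it can run on thousands of machines in parallel - this, not any single clever algorithm, is what makes the "as much data as possible" approach feasible.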
Trend: By 2020, the digital universe will have grown fiftyfold (!) to an unbelievable 40 zettabytes (ZB) = 40,000,000,000,000,000,000,000 bytes. That would be about 5 terabytes of data for every person on earth.
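A quick back-of-the-envelope check of the "5 TB per person" figure, assuming a world population of roughly 8 billion (the population figure is my assumption, not from the text):

```python
# SI (decimal) unit sizes
ZETTABYTE = 10**21          # bytes in one zettabyte
TERABYTE = 10**12           # bytes in one terabyte

digital_universe = 40 * ZETTABYTE   # projected size of the digital universe
population = 8 * 10**9              # assumed world population

per_person_tb = digital_universe / population / TERABYTE
print(per_person_tb)        # → 5.0
```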
How are these data masses to be managed? Are they left decentralized with users in local networks - which can in turn network freely? Or do a few create a central sovereignty over data masses by technical means - comparable to a new struggle for hegemony (a redistribution of the world)?
Currently BIG DATA is the magic word. Will BIG DATA, after virtualization and the CLOUD, turn out to be just another pig driven through the village - a short-lived hype that merely loosens budgets? On the other hand stands a huge amount of unstructured data from the most diverse sources (measurement results, sensor data, development data, data from procurement, sales and logistics, geo- and infrastructure data, controlling data, customer relationship data, other company data and documents of various kinds, installations, networks, social media resources). It is not just about the here and now - it is about the strategic handling of data.
Security considerations? The data protection officers appointed by the state are not equal to the task. BIG DATA can be designed in conformity with the constitution and data protection law by means of anonymized data records with only subsequent identification. In the face of → XKeyscore, that is hard to believe. The fact is: compliance requirements, especially to protect consumers and businesses, are high. Ultimately, however, all this is mere waste paper as long as federal data sovereignty does not lie with this country and its elected government, in accordance with → Basic Law Article 2 para.
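The idea of "anonymized data records with only subsequent identification" can be sketched as pseudonymization: analysts see only pseudonyms, while re-identification requires a separately guarded key table. This is a minimal illustration with invented names and fields, not a legally sufficient anonymization scheme.

```python
import hashlib
import secrets

# Hypothetical illustration: records are pseudonymized before analysis;
# the mapping back to real identities is kept in a separate key table.
key_table = {}          # pseudonym -> real identity (stored separately, guarded)

def pseudonymize(name: str) -> str:
    """Replace an identity with a salted, non-guessable pseudonym."""
    salt = secrets.token_hex(8)                      # random salt per record
    pseudonym = hashlib.sha256((salt + name).encode()).hexdigest()[:12]
    key_table[pseudonym] = name                      # enables later re-identification
    return pseudonym

record = {"name": "Erika Mustermann", "finding": "pattern X"}
record["name"] = pseudonymize(record["name"])
# Analysts see only the pseudonym; re-identification needs the key table.
```

Whether such a separation can actually be enforced against an actor like the one behind XKeyscore is, of course, exactly the doubt the paragraph above expresses.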
Business benefit - does it exist only for large companies such as Google or various service providers? Is the business benefit also tangible for small and medium-sized enterprises? Do users see a benefit of their own, or do they merely supply the data?
Practical topics and objectives of BIG DATA could be: complex modelling and simulation in research and development, high scalability and visualization down to molecular structures, optimization of the value chain, discovery of causal connections, application scenarios, real-time processing of data masses, proactive action, error detection before errors occur, and business process analysis, modelling and optimization in all business areas. We can extend this chain to software engineering as the “production technology of the 21st century” (e.g. in plant engineering and nanotechnology).
Software development for BIG DATA faces very particular challenges and possibilities. Here primacy no longer lies with intelligent algorithms but with the sheer amount of data. Data is not specified up front - it is explored. Traditionally, the user specified which data to channel, the developers created the solution, and the user worked with it and requested modifications as needed. The extended approach with BIG DATA consists in pursuing an exploratory, research-driven solution: defined data sources are analysed by various BIG DATA platforms for patterns yet to be discovered. The user works with the results and decides, if necessary, on additional new data sources.
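The exploratory approach described above can be sketched in a few lines: no pattern is specified in advance; the data itself is scanned for values that deviate from the bulk. The sensor readings are invented for illustration.

```python
from statistics import mean, stdev

# Invented sensor readings standing in for an unlabelled data source.
readings = [20.1, 19.8, 20.3, 20.0, 35.7, 19.9, 20.2, 20.1]

m, s = mean(readings), stdev(readings)

# Exploratory step: flag values far from the bulk of the data (z-score > 2)
# without anyone having specified in advance what a "fault" looks like.
anomalies = [x for x in readings if abs(x - m) / s > 2]
print(anomalies)        # → [35.7]
```

On a BIG DATA platform the principle is the same, only the pattern detectors are richer and the data sources far larger; the user then judges the discovered patterns and decides which further sources to add.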
Independent of the cloud? BIG DATA should also be feasible in the local network or WAN - without a cloud, as a stand-alone solution - or with a free choice of further networking. Data volumes are growing exponentially. With these data masses, medium-sized companies too must ensure timely analyses. Put simply: large data = great opportunities to be first, to recognize trends in time, and to find solutions to problems efficiently.
Data chaos increasingly dominates us. Where does a typical organization (company, institution, etc.) keep its data? In locations such as local profiles (archives), virtual servers, file servers, relational database systems, ERP, CRM and CMS systems, relationship systems, the enterprise wizard, project management systems, intranet, extranet, social networks, mobile communications, hundreds of Excel sheets, endless presentations, thousands of documents and memos.
How do we master this? Given our many data sources, the first core question is: where do I find what I am looking for? The second question: how do I channel it? And in the end: what have I overlooked? We have not even talked about the time involved. A solution is needed.
One could dismiss the topic, e.g. in view of current events (keyword: NSA data mania). We would then have created the cloud only for others, who strip-mine our resources. BIG DATA would now be the very tool to put our entire data world into this cloud.
Misgivings? Apart from possible legal questions (data collection), specific applications would also raise ethical concerns (genetics, e.g. genome sequencing as a euthanasia decision made in advance). An unimagined wave of innovation could also be the result. Would it be controllable? Would civilization be ready for it? Massive collateral consequences for the economy, science and society would arise. Would that be wanted?
BIG DATA - the end of scientific theory? No: automated correlations in machine-to-machine communication can prove to be poorly “thought out”. Ultimately they can turn out to be spurious and even dangerous for organizations and individuals. Autonomous processes without human influence, even in societal and sovereign affairs, would then no longer belong to the world of science fiction but to today’s world.
Imagine ...
... this trend could be considered and resolved only in a global context.
The goal cannot be a centralization of big data in clouds and in the hands of a few providers. The goal would be a decentralization of data masses (individual big data structures) in a fractally organized world (at many levels).
There would be voluntary participation in networks and new real-time technologies for exploiting large amounts of data in an environment of sharing.
It would work analogously to the Internet or to a decentralized energy supply (energy web), where everyone is needed and encouraged in an environment of give and take. Decentralization means benefit for each individual and low vulnerability for the whole.
Decentralization of a fractal BIG DATA means the possibility of control, constant renewal and participation for all components - as in an organism, as in a quantum process of constant renewal.