Big Data

Posted in Garage

Note: This is temporarily only an automatic translation.

BIG DATA - A Polemic on a Technology Trend


The Term Big Data

Wikipedia defines the term: "→ BIG DATA describes the use of large amounts of data from a variety of sources with a high processing speed for generating economic benefit".

I would like to sketch Big Data as follows.

BIG DATA is an IT technology for processing gigantic data volumes of various kinds ...

... consisting of a set of measures (tools, technologies, IT architectures, software, hardware), ...

... that does not rely on samples, but on as much data as possible ...

... to make processes, things and actions (in technology, science, society, the economy, etc.) more efficient, transparent and predictable.

Examples

To illustrate BIG DATA, I mention the following four applications.

Topic election research - the goal is no longer to determine an electoral trend by sampling on the eve of an election, but the election result itself, in advance (!).

Topic nanotechnology - the quality of a semiconductor is no longer determined after the fact by means of radiation physics and software. Instead, a certain quality is recognized as a pattern in advance and achieved without friction losses, with a resulting plan of action.

Topic medicine - primacy no longer lies with a screening study triggered by possible preconditions (e.g. hereditary disease, stress factors, environment). Instead, primacy lies in the targeted search for, and securing of, patterns even before procreation, so that a newly born person fulfils 100% of the genetic factors of an ideal person in the concrete environment to be expected (biologically, psychologically, physiologically, etc.).

Topic production processes, including the energy sector - one no longer reacts to a forecast demand in the value-added process; instead, causal and quantitative patterns are identified and created under immense, highly diverse preconditions, ensuring absolute market control (digital real-time value creation).

Advantage

BIG DATA in one's own LAN and WAN - used sensibly outside of external clouds, taking into account requirements from information security and quality management - can initiate a quantum leap in business decisions, considerable leverage and economic benefits for organizations (companies, institutions, networks).

BIG DATA in the CLOUD - here (in my opinion) lies a deeper agenda that calls the benefit into question. It is about the non-propagated goal ...

... of aggregating gigantic data sets from various sources in these clouds, ...

... of channeling this data there by pattern recognition, ...

... while these clouds, in turn, are concentrated in ever fewer hands ...

... & possibly those of non-legitimate users (e.g. the NSA).

Polemic on the technology trend Big Data

Quality management and its strategic goal of zero defects are increasingly shaped by a trend that cuts across all business processes. It is about opening up new avenues to competitive advantages, benefits and customer satisfaction. IT and value creation in organizations are increasingly and inseparably linked. Large data sets require new technologies - BIG DATA is the focus. BIG DATA - as a production factor. BIG DATA - the territory of the future?

BIG DATA's mission is to ensure speed, efficiency, analytical potential, benefit and quality when processing unstructured, gigantic data volumes - in real time if possible. It is meant to use different data sources. BIG DATA is capable, among other things, of finding hidden patterns without the human factor. BIG DATA is the energy of the future for technological self-sufficiency. Here, not only legal but above all ethical problems arise.

How? Implementation requires new, coupled hardware and software solutions. Current and emerging technologies - such as quantum computing, new algorithms and statistical methods, Virtual Reality, NoSQL (a non-relational approach for large databases) and Hadoop - will make this increasingly possible.
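The core idea behind Hadoop, MapReduce, can be sketched in a few lines. The following is a minimal illustration in plain Python - the function names are my own, not a real Hadoop API: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group.

```python
from collections import defaultdict

def map_step(document):
    # Emit a (word, 1) pair for every word in the document.
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    # Group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_step(key, values):
    # Aggregate each group; here: a simple word count.
    return key, sum(values)

documents = ["big data needs big tools", "data beats opinion"]
pairs = (pair for doc in documents for pair in map_step(doc))
counts = dict(reduce_step(k, v) for k, v in shuffle(pairs).items())
print(counts["big"], counts["data"])  # 2 2
```

The point of the pattern is that map and reduce touch each record independently, so a cluster can split the documents across many machines and only the shuffle needs coordination.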

Trend: by 2020, the digital universe is projected to grow fiftyfold (!) to an unbelievable 40 zettabytes (ZB) = 40,000,000,000,000,000,000,000 bytes. That would be about 5 terabytes of data for every person on Earth.
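A quick back-of-the-envelope check of that per-person figure, assuming a world population of roughly 7.5 billion around 2020:

```python
ZB = 10 ** 21                  # one zettabyte in bytes
TB = 10 ** 12                  # one terabyte in bytes
digital_universe = 40 * ZB     # projected size of the digital universe
population = 7.5e9             # rough world population around 2020

per_person_tb = digital_universe / population / TB
print(round(per_person_tb, 1))  # ≈ 5.3 terabytes per person
```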

How can these data masses be managed? Do we leave them decentralized with users in local networks - which in turn can network freely? Or do a few create a central sovereignty over the data masses by means of technical solutions - comparable to a new quest for hegemony (a redistribution of the world)?

Currently, BIG DATA is the magic word. Will BIG DATA - after virtualization and the CLOUD - be just the next sow driven through the village, as the German idiom goes, to loosen the money? On the other hand, there is a huge amount of unstructured data from various sources (measurement results, sensor data, development data, data from procurement, distribution and logistics, geo- and infrastructure data, controlling data, customer relationship data, other company data and documents of various kinds, installations, networks, social media resources). It is not just about the here and now - it is about the strategic handling of data.

Security considerations? The data protection officers appointed by the state are of little help here. BIG DATA can be designed in line with the constitution and data protection by means of anonymized data records and only subsequent identification. In the face of → XKeyscore that is hard to believe. The fact is - compliance requirements, especially to protect consumers and businesses, are high. Ultimately, however, all this is waste paper as long as data sovereignty does not lie with this country and its elected government, as per → Basic Law Article 2 para.
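The "anonymized records, identification only afterwards" idea mentioned above is, in practice, usually pseudonymization. A minimal sketch follows - the key handling is deliberately simplified and the record fields are made up; a real deployment would need proper key management and a legal basis for re-identification:

```python
import hashlib
import hmac

SECRET_KEY = b"keep-this-out-of-the-dataset"  # hypothetical secret, stored separately

def pseudonymize(identifier: str) -> str:
    # Keyed hash: the same person always maps to the same token,
    # but the mapping cannot be reversed without the secret key.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Erika Mustermann", "diagnosis": "A12.3"}
safe_record = {"person": pseudonymize(record["name"]),
               "diagnosis": record["diagnosis"]}
# Analysis runs on safe_record; re-identification requires the key holder.
```

Whether this counts as anonymization or merely pseudonymization is exactly the legal question the paragraph above raises: whoever holds the key can always identify people again.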

Business use - is it only there for large companies like Google or various service providers? Is a business benefit also visible to small and medium-sized enterprises? Do users see their own benefit, or do they merely supply the data?

Practical topics and objectives of BIG DATA could be: complex modeling and simulation in research and development, high scalability and visualization down to molecular structures, optimization of the value chain, discovery of causal connections, application scenarios, real-time processing of data masses, proactive action, error detection before errors occur, business process analysis, modeling and optimization in all business areas. We can continue this chain up to software engineering as the "production technology of the 21st century" (e.g. in plant engineering and nanotechnology).

Software development for BIG DATA faces very special challenges and possibilities. Here, primacy no longer lies with intelligent algorithms, but with the sheer amount of data. Data is not directed at first - it is explored. Traditionally, the user specified which data to channel, the developers created the solution, and the user worked with it, requesting rebuilds as needed. The extended approach using BIG DATA consists in pursuing an exploratory, researching solution: defined data sources are analyzed according to patterns, which various BIG DATA platforms are to develop. The user uses the results and decides, if necessary, on additional new data sources.

Independent of the cloud? BIG DATA should also be feasible in the local network or WAN - without a cloud, as an insular solution - or with a free choice of further networking. Data volumes are growing exponentially. With these data masses, medium-sized companies can also ensure timely analyses. Simply put: large data = great opportunities to be first, to recognize trends in time, and to solve problems efficiently.

Data chaos increasingly dominates us. Where do we keep our data in a typical organization (company, institution, etc.)? In locations such as local profiles (archives), virtual servers, file servers, relational database systems, ERP systems, CRM systems, CMS systems, relationship systems, enterprise wizards, project management systems, intranet, extranet, social networks, mobile communications, hundreds of Excel sheets, endless presentations, thousands of documents or memos.

How do we master this? With so many data sources, let us first clarify the core question - where can I find what I am looking for? The second question - how do I channel it? In the end the question remains - what have I overlooked? And we have not even talked about the time this takes. There must be a solution.

One could dismiss the topic, e.g. in view of current events (keyword: NSA data mania). We would then have created the cloud only for others, who strip-mine our resources. BIG DATA is now the tool to put our entire data world into that cloud.

Misgivings? Apart from possible legal questions (data collection), there would also be ethical concerns with specific applications (genetics, e.g. genome sequencing as a euthanasia decision in advance). An unimagined wave of innovation could also be the result. Would it be controllable? Would civilization be ready? Massive collateral consequences in the economy, science and society would arise. Would that be desirable?

BIG DATA - the end of scientific theory? No. Automated correlations in machine-to-machine communication can prove to be poorly "thought out". Ultimately, they can turn out falsified and even become dangerous to organizations and individuals. Possible autonomous processes without human influence, even in societal and sovereign processes, would then no longer belong to the world of science fiction, but to today's world.

Vision

Imagine ...

... this trend could only be viewed and solved in a global context.

The goal cannot be a centralization of big data in clouds and in the hands of ever fewer providers. The goal would be a decentralization of data masses (individual big data structures) in a fractally organized world (on an equal level).

There would be voluntary participation in networks and new real-time technologies for the benefit of large amounts of data in an environment of sharing.

This would work analogously to the Internet or to a decentralized energy supply (energy web), where everyone is needed and promoted in an environment of giving and taking. Decentralization means benefits for everyone and low vulnerability for the whole.

Decentralization of a fractal BIG DATA means the possibility of control, constant renewal and participation for all components - as in an organism, as in a quantum process of constant renewal.


Comment or question here: