891 results for individual zones of optimal functioning model


Relevance:

100.00%

Publisher:

Abstract:

Tsvetomir Tsachev (Цветомир Цачев) - This report surveys some results in the field of optimal control of continuous heterogeneous systems published in the periodical scientific literature in recent years. A dynamical system is called heterogeneous if each of its elements has its own dynamics. Here we consider optimal control of systems whose heterogeneity is described by a one- or two-dimensional parameter, with each value of the parameter corresponding to an element of the system. Heterogeneous dynamical systems are used to model processes in economics, epidemiology, biology, the protection of public security (limiting the use of narcotics), and elsewhere. Here we consider a model of optimal investment in education at the macroeconomic level [11], of limiting the consequences of the spread of AIDS [9], of a market for carbon emission permits [3, 4], and of optimal macroeconomic growth under a rising level of high technology [1]. Keywords: optimal control, continuous heterogeneous dynamic systems, applications in economics and epidemiology
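To make the setting concrete, a generic one-parameter heterogeneous optimal control problem takes the following form (an illustrative formulation only, not any specific model from [1, 3, 4, 9, 11]):

```latex
\max_{u(\cdot,\cdot)}\ \int_0^T\!\!\int_\Omega L\bigl(t,\omega,x(t,\omega),u(t,\omega)\bigr)\,d\omega\,dt
\quad\text{s.t.}\quad
\frac{\partial x(t,\omega)}{\partial t}=f\bigl(t,\omega,x(t,\omega),u(t,\omega)\bigr),
\qquad x(0,\omega)=x_0(\omega),
```

where the parameter ω ∈ Ω indexes the elements of the system, x(t, ω) is the state of element ω, and u(t, ω) its control; each element thus has its own dynamics, coupled through the objective (or, in richer models, through aggregate state variables).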

Relevance:

100.00%

Publisher:

Abstract:

Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment, in both objective and subjective terms, has not been properly addressed. In this paper, a full analytic model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented, based on statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses quality in terms of the continuity of playout of TCP streaming video, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of the individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
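The paper derives PI analytically from the buffer model; as a rough, hypothetical illustration of the underlying quantity (assuming PI is read as the fraction of session time spent paused, with invented rates and no start-up or rebuffering threshold), a discrete-time buffer simulation might look like this:

```python
import random

def pause_intensity(throughput, playout_rate, dt=0.1, t_end=120.0):
    """Rough sketch: fraction of session time spent paused (buffer empty).

    Assumes a constant playout_rate (bits/s) and a throughput(t) callable
    (bits/s); the paper's analytic PI model is more detailed than this.
    """
    buffer_bits = 0.0
    paused_time = 0.0
    t = 0.0
    while t < t_end:
        buffer_bits += throughput(t) * dt          # network fills the buffer
        if buffer_bits >= playout_rate * dt:       # enough data for one step
            buffer_bits -= playout_rate * dt       # playout drains the buffer
        else:                                      # underrun -> pause
            paused_time += dt
        t += dt
    return paused_time / t_end

# Example: a 400 kb/s stream over a link averaging ~360 kb/s.
pi = pause_intensity(lambda t: random.uniform(250e3, 470e3), 400e3)
print(f"pause intensity ~ {pi:.2f}")
```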

Relevance:

100.00%

Publisher:

Abstract:

Radio Frequency Identification (RFID) technology adoption in healthcare settings has the potential to reduce errors, improve patient safety, streamline operational processes, and enable the sharing of information throughout supply chains. RFID adoption in the English NHS is limited to isolated pilot studies. First, this study investigates the drivers of and inhibitors to RFID adoption in the English NHS from the perspective of the GS1 Healthcare User Group (HUG), tasked with coordinating adoption across the private and public sectors. Second, a conceptual model has been developed and deployed, combining two of foresight's most popular methods: scenario planning and technology roadmapping. The model addresses the weaknesses of each foresight technique while capitalizing on their individual, inherent strengths. Semi-structured interviews, scenario planning workshops, and a technology roadmapping exercise were conducted with the members of the HUG over an 18-month period. An action research mode of enquiry was used, with a thematic analysis approach for the identification and discussion of the drivers and inhibitors of RFID adoption. The results of the conceptual model are analysed in comparison with other similar models. There are implications for managers responsible for RFID adoption in both the NHS and its commercial partners, and for foresight practitioners. Managers can leverage the insights gained from identifying the drivers and inhibitors to RFID adoption by working to remove the inhibitors and to sustain the drivers. The academic contribution of this aspect of the thesis is in the field of RFID adoption in healthcare settings: drivers and inhibitors to RFID adoption in the English NHS are compared with those found in other settings. The implication for technology foresight practitioners is a proof of concept of a model combining scenario planning and technology roadmapping using a novel process. The academic contribution to the field of technology foresight is the conceptual development of a foresight model that combines two popular techniques, followed by its deployment in a healthcare setting to explore the future of RFID technology.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this descriptive study was to evaluate the banking and insurance technology curriculum at ten junior colleges in Taiwan. The study focused on curriculum, curriculum materials, instruction, support services, student achievement, and job performance. Data were collected from a diverse sample of faculty, students, alumni, and employers. Questionnaires on the evaluation of curriculum at technical junior colleges were developed for this specific case. The data were analyzed using ANOVA, t-tests, and crosstabulations. The findings indicate that there is room for improvement in meeting individual students' needs. Using Stufflebeam's CIPP model for curriculum evaluation, it was determined that the curriculum was adequate in terms of the knowledge and skills imparted to students. However, students were dissatisfied with the rigidity of the curriculum and the lack of opportunity to satisfy individual needs. Employers were satisfied with both the academic preparation of students and their on-the-job performance. In sum, the two-year banking and insurance technology programs of junior colleges in Taiwan were shown to have served adequately in preparing a workforce to enter business. It is now time to look toward the future and adapt the curriculum and instruction to the needs of an ever-evolving high-tech society.

Relevance:

100.00%

Publisher:

Abstract:

Miami-Dade County implemented a series of water conservation programs, including rebate/exchange incentives to encourage the use of high-efficiency aerators (AR), showerheads (SH), toilets (HET), and clothes washers (HEW), in response to environmental sustainability concerns in urban areas. This study first used panel data analysis of water consumption to evaluate the performance and actual water savings of the individual programs. An integrated water demand model was also developed to incorporate each property's physical characteristics into the water consumption profiles. A life cycle assessment (with emphasis on the end-use stage of the water system) of water-intensive appliances was conducted to determine the environmental impacts of each practice. Approximately 6 to 10% of water was saved in the first and second years of implementation of high-efficiency appliances, with continuing savings in the third and fourth years. Water savings (gallons per household per day) for the efficiency appliances were 28 (11.1%) for SH, 34.7 (13.3%) for HET, and 39.7 (14.5%) for HEW. Furthermore, the estimated contributions of high-efficiency appliances to reducing water demand in the integrated water demand model were between 5 and 19% (highest in the AR program). The results indicated that adopting more than one type of water-efficient appliance can significantly reduce residential water demand. For sustainable water management, the appropriate water conservation rate was projected at 1 to 2 million gallons per day (MGD) through 2030. With 2 MGD of water savings, the estimated per capita water use could be reduced from approximately 140 to 122 gallons per capita per day (GPCD). Additional efforts are needed to reduce water demand to US EPA's "WaterSense" conservation level of 70 GPCD by 2030. The life cycle assessment showed that the environmental impacts (water and energy demands and greenhouse gas emissions) of the end-use and demand phases are the most significant within the water system, particularly due to water heating (73% for clothes washers and 93% for showerheads). Estimates of optimal appliance lifespans (8 to 21 years) imply that earlier replacement with efficient models is encouraged in order to minimize the environmental impacts of current practice.
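As an illustrative consistency check on the reported figures (an assumption of this note, not the study: that the 2 MGD of savings is spread across the entire service population), the implied population served is

```latex
N \approx \frac{2\ \text{MGD}}{(140-122)\ \text{GPCD}}
   = \frac{2{,}000{,}000\ \text{gal/day}}{18\ \text{gal/person/day}}
   \approx 1.1\times10^{5}\ \text{persons}.
```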

Relevance:

100.00%

Publisher:

Abstract:

We analyzed the dynamics of freshwater marsh vegetation of Taylor Slough in eastern Everglades National Park for the period 1979 to 2003, focusing on the cover of individual plant species and on the cover and composition of marsh communities in areas potentially influenced by a canal pump station ("S332") and its successor station ("S332D"). The vegetation change analysis incorporated the hydrologic record at these sites for three intervals: pre-S332 (1961–1980), S332 (1980–1999), and post-S332 (1999–2002). During the S332 and post-S332 intervals, the water level in Taylor Slough was affected by the operation of S332 and S332D. To relate vegetation change to plot-level hydrological conditions in Taylor Slough, we developed a weighted averaging regression and calibration model (WA) using data from the marl prairies of Everglades National Park and Big Cypress National Preserve. We examined vegetation pattern along five transects. Transects 1–3 were established in 1979 south of the water delivery structures and were influenced by their operations. Transects 4 and 5 were established in 1997, the latter west of these structures and possibly under their influence; Transect 4 was established in the northern drainage basin of Taylor Slough, beyond the likely zones of influence of S332 and S332D. The composition of all three southern transects changed similarly after 1979. Where muhly grass (Muhlenbergia capillaris var. filipes) was once dominant, sawgrass (Cladium jamaicense) replaced it, while where sawgrass initially predominated, hydric species such as spikerush (Eleocharis cellulosa Torr.) overtook it. Most of the changes in species dominance in Transects 1–3 occurred after 1992, were largely in place by 1995–1996, and continued through 1999, indicating how rapidly vegetation in seasonal Everglades marshes can respond to hydrological modifications. During the post-S332 period, these long-term trends began reversing. In the two northern transects, total cover and the dominance of both muhly grass and sawgrass increased from 1997 to 2003. Thus, during the 1990s, vegetation composition south of S332 became more like that of long-hydroperiod marshes, but afterward it partially returned to its 1979 condition, i.e., a community characteristic of less prolonged flooding. In contrast, vegetation change along the two northern transects since 1997 showed little relationship to hydrologic status.
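For readers unfamiliar with weighted averaging (WA) regression and calibration, a minimal sketch of the standard estimator follows (illustrative only: toy numbers, and the deshrinking step a full WA calibration would include is omitted):

```python
import numpy as np

def wa_train(Y, x):
    """Species optima as abundance-weighted means of the hydrologic variable.

    Y: (sites x species) cover/abundance matrix; x: (sites,) e.g. hydroperiod.
    Minimal weighted-averaging regression; a full WA calibration would also
    apply a deshrinking regression to the inferred values.
    """
    return (Y * x[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_predict(Y, optima):
    """Inferred hydrologic value per site: weighted mean of species optima."""
    return (Y * optima[None, :]).sum(axis=1) / Y.sum(axis=1)

# Toy example: 4 plots, 3 species, observed hydroperiod in days.
Y = np.array([[5., 1., 0.], [3., 2., 1.], [1., 4., 2.], [0., 2., 6.]])
x = np.array([90., 150., 210., 300.])
optima = wa_train(Y, x)
print(wa_predict(Y, optima))   # reconstructed plot-level hydroperiods
```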

Relevance:

100.00%

Publisher:

Abstract:

A description and model of the near-surface hydrothermal system at Casa Diablo, with its implications for the larger-scale hydrothermal system of Long Valley, California, is presented. The data include resistivity profiles penetrating to three different depth ranges and analyses of inorganic mercury concentrations in 144 soil samples taken over a 1.3 by 1.7 km area. Analysis of these data, together with mapping of active surface hydrothermal features (fumaroles, mudpots, etc.), has revealed that the relationship between the hydrothermal system, surface hydrothermal activity, and mercury anomalies is strongly controlled by faults and topography. There are, however, more subtle factors responsible for the location of many active and anomalous zones, such as fractures, zones of high permeability, and interactions between hydrothermal and cooler groundwater. In addition, the near-surface location of the upwelling from the deep hydrothermal reservoir, which supplies the geothermal power plants at Casa Diablo and the numerous hot pools in the caldera with hydrothermal water, has been detected. The data indicate that after upwelling, the hydrothermal water flows eastward at shallow depth for at least 2 km and probably continues another 10 km to the east, all the way to Lake Crowley.

Relevance:

100.00%

Publisher:

Abstract:

Twenty-four manganese nodules from the surface of the sea floor and fifteen buried nodules were studied. With three exceptions, the nodules were collected from the area covered by Valdivia Cruise VA 04, some 1200 nautical miles southeast of Hawaii. Age determinations were made using the ionium method. In order to obtain a true reproduction of the activity distribution in the nodules, they were cut in half and placed for one month on nuclear emulsion plates to record the alpha-activity of the ionium and its daughter products. Special methods of counting the alpha-tracks allowed resolution to depth intervals of 0.125 mm. For the first time it was possible to resolve zones of rapid growth (impulse growth), with growth rates s > 50 mm/10⁶ yr, and interruptions in growth. With few exceptions, the average growth rate of all nodules was surprisingly uniform at 4-9 mm/10⁶ yr. No growth could be detected radiometrically in the buried nodules. One exceptional nodule has had recent impulse growth and, in the newly formed material, the ionium is not yet in equilibrium with its daughter products. Individual layers in one nodule from the Indian Ocean could be dated, and an average time interval of t = 2600±400 yr was required to form one layer. The alternation between iron-rich and manganese-rich parts of the nodules was made visible by colour differences produced by treating cut surfaces with HCl vapour. The zones of slow growth of one nodule are relatively enriched in iron. Earlier attempts to find paleomagnetic reversals in manganese nodules were continued; despite considerable improvement in areal resolution, no reversals were detected in the nodules studied. Comparison of the surface structure, the microstructure in section, and the radiometric dating shows that there are both erosion surfaces and growth surfaces on the outside of the manganese nodules. The formation of cracks in the nodules was studied in particular. The model of age-dependent nodule shrinkage and cracking indicates, surprisingly, that nodules break after exceeding a certain age and/or size; the breaking apart of manganese nodules is therefore a continuous process, not one of catastrophic or discontinuous origin. The microstructure of the nodules reflects differences in the mechanism and rate of accretion of material, referred to here in short as the accretion form. Both non-directional and directional growth may be observed inside the nodules. Nodules with large accretion forms have grown faster than those with smaller ones; parallel layers accordingly indicate slow growth. The upper surfaces of the nodules, protruding into the bottom water, appear more prone to growth disturbances than the lower surfaces immersed in the sediment. Features of some nodules show that, as they developed, they neither turned nor rolled. Still unknown is the mechanism that keeps the nodules at the surface during continuous sedimentation. All in all, the nodules remain objects with their own distinctive problems, and the hope of using them as a kind of history book still seems very remote.
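An illustrative back-of-the-envelope check (not from the source, and ignoring that layers may partly form during impulse growth): at the average growth rates of 4-9 mm/10⁶ yr, the dated inter-layer interval of t = 2600±400 yr corresponds to layer thicknesses of roughly

```latex
d \approx s\,t = (4\text{--}9)\ \frac{\text{mm}}{10^{6}\ \text{yr}} \times 2600\ \text{yr}
  \approx 0.010\text{--}0.023\ \text{mm},
```

i.e. layers on the order of 10-25 μm, well below the 0.125 mm depth resolution of the track counting.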

Relevance:

100.00%

Publisher:

Abstract:

With the importance of renewable energy well established worldwide, and targets for such energy quantified in many cases, there is considerable interest in the assessment of wind and wave devices. While the individual components of these devices are often relatively well understood and the aspects of energy generation well researched, there appears to be a gap in the understanding of these devices as a whole, especially in the field of their dynamic responses under operational conditions. The mathematical modelling and estimation of their dynamic responses are more evolved, but research directed towards the testing of these devices still requires significant attention. Model-free indicators of the dynamic responses of these devices are important since they reflect the as-deployed behaviour of the devices when the exposure conditions, along with the structural dimensions, are scaled reasonably correctly. This paper demonstrates how the Hurst exponent of the dynamic responses of a monopile exposed to different exposure conditions in an ocean wave basin can be used as a model-free indicator of various responses. The scaled model is exposed to Froude-scaled waves and tested under different exposure conditions. The analysis and interpretation are carried out in a model-free and output-only environment, with only some preliminary ideas regarding the input of the system. The analysis indicates how the Hurst exponent can be an interesting descriptor for comparing and contrasting various scenarios of dynamic response conditions.
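The paper does not specify which estimator of the Hurst exponent was used; a minimal sketch of the rescaled-range (R/S) method, one standard choice, is:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D response record via
    rescaled-range (R/S) analysis. A common estimator; the paper does
    not state which variant it applies.
    """
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())       # cumulative deviations
            r = dev.max() - dev.min()               # range of deviations
            s = seg.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    # slope of log(R/S) against log(window size) estimates H
    return np.polyfit(np.log(sizes), np.log(rs_vals), 1)[0]

# Sanity check: white noise should give H ~ 0.5.
print(hurst_rs(np.random.randn(4096)))
```

For a response record, H ≈ 0.5 indicates uncorrelated fluctuations, while H > 0.5 indicates persistent, long-memory behaviour.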

Relevance:

100.00%

Publisher:

Abstract:

The gravitationally confined detonation (GCD) model has been proposed as a possible explosion mechanism for Type Ia supernovae in the single-degenerate evolution channel. It starts with the ignition of a deflagration in a single off-centre bubble in a near-Chandrasekhar-mass white dwarf. Driven by buoyancy, the deflagration flame rises in a narrow cone towards the surface. For the most part, the main component of the flow of the expanding ashes remains radial, but upon reaching the outer, low-pressure layers of the white dwarf, an additional lateral component develops. This causes the deflagration ashes to converge again on the opposite side, where the compression heats fuel and a detonation may be launched. We first performed five three-dimensional hydrodynamic simulations of the deflagration phase in 1.4 M⊙ carbon/oxygen white dwarfs at intermediate resolution (256³ computational zones). We confirm that the closer to the centre the initial deflagration is ignited, the slower the buoyant rise and the longer the deflagration ashes take to break out and close in on the opposite pole to collide. To test the GCD explosion model, we then performed a high-resolution (512³ computational zones) simulation for a model with an ignition spot offset near the upper limit of what is still justifiable, 200 km. This high-resolution simulation met our deliberately optimistic detonation criteria, and we initiated a detonation. The detonation burned through the white dwarf and led to its complete disruption. For this model, we determined detailed nucleosynthetic yields by post-processing 10⁶ tracer particles with a 384-nuclide reaction network, and we present multi-band light curves and time-dependent optical spectra. We find that our synthetic observables show a prominent viewing-angle sensitivity in the ultraviolet and blue wavelength bands, which contradicts observed SNe Ia. The strong dependence on viewing angle is caused by the asymmetric distribution of the deflagration ashes in the outer ejecta layers. Finally, we compared our model to SN 1991T. The overall flux level of the model is slightly too low, and the model predicts pre-maximum-light spectral features due to Ca, S, and Si that are too strong. Furthermore, the model's chemical abundance stratification qualitatively disagrees with recent abundance tomography results in two key areas: our model lacks low-velocity stable Fe and instead has copious amounts of high-velocity ⁵⁶Ni and stable Fe. We therefore do not find good agreement of the model with SN 1991T.

Relevance:

100.00%

Publisher:

Abstract:

To gain better insight into the radiological features of industrial by-products that can be reused in building materials, a review of the reported scientific data can be very useful. The current study is based on the continuously growing database of the By-BM (H2020-MSCA-IF-2015) project (By-products for Building Materials). Currently, the By-BM database contains individual data on about 431 by-products and 1095 building and raw materials. It was found that for the building materials the natural radionuclide content varied widely (Ra-226: <DL-27851 Bq/kg; Th-232: <DL-906 Bq/kg; K-40: <DL-17922 Bq/kg), more so than for the by-products (Ra-226: 7-3152 Bq/kg; Th-232: <DL-1350 Bq/kg; K-40: <DL-3001 Bq/kg). The average Ra-226, Th-232, and K-40 contents of the reported by-products were respectively 2.52, 2.35, and 0.39 times those of the building materials. The gamma exposure of bulk building products was calculated according to IAEA Specific Safety Guide No. SSG-32 and the I-index of European Commission Radiation Protection 112 (as adopted in the EU BSS). It was found that in most cases the I-index, when density is not taken into account, significantly overestimates the excess effective dose.
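For reference, the activity concentration index of European Commission Radiation Protection 112 (carried into the EU BSS) for bulk building materials is

```latex
I = \frac{C_{\mathrm{Ra\text{-}226}}}{300\ \mathrm{Bq/kg}}
  + \frac{C_{\mathrm{Th\text{-}232}}}{200\ \mathrm{Bq/kg}}
  + \frac{C_{\mathrm{K\text{-}40}}}{3000\ \mathrm{Bq/kg}},
```

with I ≤ 1 used as the screening criterion, corresponding to a reference level of about 1 mSv/yr in excess effective dose. Applying this screening index without the material's actual density and thickness is what produces the overestimation noted above.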

Relevance:

100.00%

Publisher:

Abstract:

The ability to understand written texts, i.e., to construct a coherent mental representation of text content, is a necessary prerequisite for successful development in school and beyond. It is therefore a central concern of the education system to diagnose reading difficulties early and to address them with targeted intervention programs. This requires comprehensive knowledge of the cognitive component processes underlying reading comprehension, of their interrelations, and of their development. This dissertation aims to contribute to a comprehensive understanding of reading comprehension by experimentally investigating a selection of open questions. Study 1 examines the extent to which phonological recoding and orthographic decoding skills contribute to sentence and text comprehension, and how both skills develop in German primary school children from Grade 2 to Grade 4. The results suggest that both skills make significant and independent contributions to reading comprehension and that their relative contribution does not change across grade levels. Moreover, German second-graders already recognize the majority of written words in age-appropriate texts via orthographic comparison processes. Nevertheless, German primary school children apparently make continuous use of phonological information to optimize visual word recognition. Study 2 extends previous empirical research on one of the best-known models of reading comprehension, the Simple View of Reading (SVR; Gough & Tunmer, 1986). The study tests the SVR (Reading comprehension = Decoding × Comprehension) using optimized and methodologically stringent measures of the model constituents and examines its generalizability to German third- and fourth-graders. Study 2 shows that the SVR does not withstand a methodologically stringent test and cannot readily be generalized to German third- and fourth-graders. Only weak evidence was found for a multiplicative combination of decoding (D) and listening comprehension (C) skills. The fact that a considerable portion of the variance in reading comprehension (R) could not be explained by D and C suggests that the model is incomplete and may need to be supplemented by further components. Study 3 investigates the processing of positive-causal and negative-causal coherence relations in reading and listening comprehension among German first- to fourth-graders and adults. Consistent with the Cumulative Cognitive Complexity approach (Evers-Vermeul & Sanders, 2009; Spooren & Sanders, 2008), Study 3 shows that processing negative-causal coherence relations and connectives is cognitively more demanding than processing positive-causal relations. Moreover, comprehension of both types of coherence relations continues to develop across the primary school years and, for negative-causal relations, is not yet complete by the end of Grade 4. Study 4 demonstrates and discusses the usefulness of process-oriented reading tests such as ProDi-L (Richter et al., in press), which selectively assess individual differences in the cognitive component skills of reading comprehension. As an example, the construct validity of the ProDi-L subtest 'Syntactic Integration' is demonstrated. Using explanatory item-response models, it is shown that the test assesses syntactic integration skills separately and can identify children with deficient syntactic skills. The reported findings contribute to a comprehensive understanding of the cognitive component skills of reading comprehension, which is essential for the optimal design of reading instruction, learning materials, and textbooks. Moreover, it provides the basis for meaningful diagnosis of individual reading difficulties and for the design of adaptive, targeted intervention programs to foster reading comprehension in poor readers.

Relevance:

100.00%

Publisher:

Abstract:

The blast furnace is the world's main ironmaking production unit; it converts iron ore, with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, because of the high temperatures and pressure, the hostile atmosphere, and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model is fast, so the operators can use it to gain an understanding of the layer formation produced by different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which used the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem (see the sketch after this abstract). It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with results from small-scale experiments. The mixed layer was defined and its voidage estimated; the mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory and used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
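As a minimal illustration of the optimization step referred to above (a hypothetical sketch: abstract parameters in [0, 1] and a placeholder fitness stand in for the thesis's coupled burden-distribution and gas-flow models):

```python
import random

def evolve_charging(fitness, n_params=6, pop=40, gens=60,
                    p_mut=0.15, bounds=(0.0, 1.0)):
    """Minimal genetic-algorithm loop for tuning charging parameters.

    `fitness` would wrap the burden-distribution and gas-flow models and
    score the deviation from a target gas temperature distribution;
    lower scores are better.
    """
    lo, hi = bounds
    population = [[random.uniform(lo, hi) for _ in range(n_params)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness)
        parents = scored[:pop // 2]                  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):                # random-reset mutation
                if random.random() < p_mut:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

# Placeholder fitness: squared distance from an assumed target profile.
target = [0.2, 0.5, 0.8, 0.5, 0.3, 0.6]
best = evolve_charging(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)))
print(best)
```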

Relevance:

100.00%

Publisher:

Abstract:

The speed with which data has moved from being scarce, expensive, and valuable (thus justifying detailed and careful verification and analysis) to a situation where the streams of detailed data are almost too large to handle has caused a series of shifts. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. Until recently, Big Data applications were driven by the capacity to harness massive quantities of existing data. Now real-time data flows are rising swiftly, becoming more invasive and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often remarkably intrusive personal data need to be resolved, and rapidly, but resolution is likely to encounter rising resistance from industrial and commercial bodies who see this data flow as 'theirs'. There have been many changes in ICT that have led to stresses in resolving the conflicts between IP exploiters and their customers, but this one is of a different scale, owing to the wide potential for individual customisation of pricing and identification and the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift from the current confusion over who owns what data toward alignments in better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits that are clearly available to all if these issues can be positively resolved. The customers need some way to make these data flows contestable. These big data flows are only going to grow and become ever more intrusive, so a better balance is necessary. For the first time, these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown; yet the data gathered is not available to the subjects. This is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. (An accompanying Web extra is the audio portion of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences.")