53 results for Model Based Development


Relevance: 90.00%

Abstract:

The structure of turbulence in the ocean surface layer is investigated using a simplified semi-analytical model based on rapid-distortion theory. In this model, which is linear with respect to the turbulence, the flow comprises a mean Eulerian shear current, the Stokes drift of an irrotational surface wave, which accounts for the irreversible effect of the waves on the turbulence, and the turbulence itself, whose time evolution is calculated. By analysing the equations of motion used in the model, which are linearised versions of the Craik–Leibovich equations containing a ‘vortex force’, it is found that a flow including mean shear and a Stokes drift is formally equivalent to a flow including mean shear and rotation. In particular, Craik and Leibovich’s condition for the linear instability of the first kind of flow is equivalent to Bradshaw’s condition for the linear instability of the second. However, the present study goes beyond linear stability analyses by considering flow disturbances of finite amplitude, which makes it possible to calculate turbulence statistics and to address cases where the linear stability is neutral. Results from the model show that the turbulence displays a structure with a continuous variation of the anisotropy and elongation, ranging from streaky structures, for distortion by shear only, to streamwise vortices resembling Langmuir circulations, for distortion by Stokes drift only. The turbulence kinetic energy (TKE) grows faster for distortion by a shear and a Stokes drift gradient with the same sign (a situation relevant to wind waves), but the turbulence is more isotropic in that case (which is linearly unstable to Langmuir circulations).
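For readers unfamiliar with the ‘vortex force’, a minimal sketch of the Craik–Leibovich momentum equation in the form commonly quoted in the wave–turbulence literature (not necessarily the exact linearised form used in this model) is

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla \pi + \mathbf{u}_s \times \boldsymbol{\omega} + \nu \nabla^{2}\mathbf{u},
\]

where \(\mathbf{u}_s\) is the Stokes drift, \(\boldsymbol{\omega} = \nabla \times \mathbf{u}\) the vorticity, \(\pi\) a modified pressure, and \(\mathbf{u}_s \times \boldsymbol{\omega}\) the vortex force. The formal equivalence noted in the abstract reflects the fact that, in the linearised equations, this term acts on the sheared flow in the same way as a Coriolis (rotation) term.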

Relevance: 90.00%

Abstract:

Acrylamide is formed from reducing sugars and asparagine during the preparation of French fries. The commercial preparation of French fries is a multistage process involving the preparation of frozen, par-fried potato strips for distribution to catering outlets, where they are finish-fried. The initial blanching, treatment in glucose solution, and par-frying steps are crucial because they determine the levels of precursors present at the beginning of the finish-frying process. To minimize the quantities of acrylamide in cooked fries, it is important to understand the impact of each stage on the formation of acrylamide. Acrylamide, amino acids, sugars, moisture, fat, and color were monitored at time intervals during the frying of potato strips that had been dipped in various concentrations of glucose and fructose during a typical pretreatment. A mathematical model based on the fundamental chemical reaction pathways of the finish-frying was developed, incorporating moisture and temperature gradients in the fries. This showed the contribution of both glucose and fructose to the generation of acrylamide and accurately predicted the acrylamide content of the final fries.
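The published model (which incorporates moisture and temperature gradients in the fries) is not reproduced here, but the basic idea of coupled precursor-depletion kinetics can be sketched as a toy ODE system. The rate constants, the simple second-order rate laws and the initial concentrations below are illustrative assumptions, not values from the paper.

```python
# Toy kinetics sketch: acrylamide formation from reducing sugars + asparagine.
# Rate constants and initial levels are illustrative placeholders, not fitted values.
from scipy.integrate import solve_ivp

def rates(t, y, k_glc=0.020, k_fru=0.035, k_loss=0.010):
    glc, fru, asn, acr = y
    r_glc = k_glc * glc * asn      # acrylamide route via glucose
    r_fru = k_fru * fru * asn      # acrylamide route via fructose
    return [-r_glc,                # glucose depletion
            -r_fru,                # fructose depletion
            -(r_glc + r_fru),      # asparagine depletion
            r_glc + r_fru - k_loss * acr]   # acrylamide formation minus losses

# Initial precursor levels (arbitrary units) and a 3-minute finish-fry.
sol = solve_ivp(rates, (0.0, 180.0), [1.0, 0.5, 0.8, 0.0])
print("acrylamide at end of frying (arbitrary units):", sol.y[3, -1])
```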

Relevance: 90.00%

Abstract:

This research presents a novel multi-functional system for medical Imaging-enabled Assistive Diagnosis (IAD). Although the IAD demonstrator has focused on abdominal images and supports the clinical diagnosis of kidneys using CT/MRI imaging, it can be adapted to work on image delineation, annotation and 3D real-size volumetric modelling of other organ structures such as the brain, spine, etc. The IAD provides advanced real-time 3D visualisation and measurements with fully automated functionalities, developed in two stages. In the first stage, via the clinically driven user interface, specialist clinicians use CT/MRI imaging datasets to accurately delineate and annotate the kidneys and their possible abnormalities, thus creating "3D Golden Standard Models". Based on these models, in the second stage, clinical support staff, i.e. medical technicians, interactively define model-based rules and parameters for the integrated "Automatic Recognition Framework" to achieve results which are closest to those of the clinicians. These specific rules and parameters are stored in "Templates" and can later be used by any clinician to automatically identify organ structures, i.e. kidneys, and their possible abnormalities. The system also supports the transmission of these "Templates" to another expert for a second opinion. A 3D model of the body, the organs and their possible pathology with real metrics is also integrated. The automatic functionality was tested on eleven MRI datasets (comprising 286 images) and the 3D models were validated by comparing them with the metrics from the corresponding "3D Golden Standard Models". The system provides metrics for the evaluation of the results in terms of Accuracy, Precision, Sensitivity, Specificity and the Dice Similarity Coefficient (DSC), so as to enable benchmarking of its performance. The first IAD prototype has produced promising results: its accuracy based on the most widely deployed evaluation metric, the DSC, is 97% for the recognition of kidneys and 96% for their abnormalities, whilst across all the above evaluation metrics its performance ranges between 96% and 100%. Further development of the IAD system is in progress to extend and evaluate its clinical diagnostic support capability through the development and integration of additional algorithms to offer fully computer-aided identification of other organs and their abnormalities based on CT/MRI/Ultrasound imaging.
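The evaluation metrics quoted above (Accuracy, Precision, Sensitivity, Specificity, DSC) are standard overlap measures; a minimal sketch of how they can be computed from a predicted binary segmentation and its "3D Golden Standard" reference mask follows. The array shapes and the function name are illustrative, not part of the IAD system.

```python
# Minimal sketch: standard overlap metrics between a predicted binary mask
# and a reference ("golden standard") binary mask. Names are illustrative.
import numpy as np

def segmentation_metrics(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)     # true positives
    tn = np.sum(~pred & ~ref)   # true negatives
    fp = np.sum(pred & ~ref)    # false positives
    fn = np.sum(~pred & ref)    # false negatives
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dsc":         2 * tp / (2 * tp + fp + fn),   # Dice Similarity Coefficient
    }

# Example on a toy 3D volume.
rng = np.random.default_rng(0)
ref = rng.random((16, 16, 16)) > 0.5
pred = ref.copy()
pred[0, 0, :4] ^= True   # perturb a few voxels
print(segmentation_metrics(pred, ref))
```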

Relevance: 90.00%

Abstract:

Aerosols affect the Earth's energy budget directly by scattering and absorbing radiation and indirectly by acting as cloud condensation nuclei and, thereby, affecting cloud properties. However, large uncertainties exist in current estimates of aerosol forcing because of incomplete knowledge concerning the distribution and the physical and chemical properties of aerosols as well as aerosol-cloud interactions. In recent years, a great deal of effort has gone into improving measurements and datasets. It is thus feasible to shift the estimates of aerosol forcing from largely model-based to increasingly measurement-based. Our goal is to assess current observational capabilities and identify uncertainties in the aerosol direct forcing through comparisons of different methods with independent sources of uncertainties. Here we assess the aerosol optical depth (τ), the direct radiative effect (DRE) by natural and anthropogenic aerosols, and the direct climate forcing (DCF) by anthropogenic aerosols, focusing on satellite and ground-based measurements supplemented by global chemical transport model (CTM) simulations. The multi-spectral MODIS measures global distributions of τ on a daily scale, with a high accuracy of ±0.03 ± 0.05τ over ocean. The annual average τ is about 0.14 over the global ocean, of which about 21% ± 7% is contributed by human activities, as estimated from the MODIS fine-mode fraction. The multi-angle MISR derives an annual average τ of 0.23 over global land with an uncertainty of ~20% or ±0.05. These high-accuracy aerosol products and broadband flux measurements from CERES make it feasible to obtain observational constraints for the aerosol direct effect, especially over the global ocean. A number of measurement-based approaches estimate the clear-sky DRE (on solar radiation) at the top-of-atmosphere (TOA) to be about -5.5 ± 0.2 W m-2 (median ± standard error from various methods) over the global ocean. Accounting for thin-cirrus contamination of the satellite-derived aerosol field reduces the TOA DRE to -5.0 W m-2. Because of a lack of measurements of aerosol absorption and the difficulty of characterizing land-surface reflection, estimates of the DRE over land and at the ocean surface are currently realized through a combination of satellite retrievals, surface measurements, and model simulations, and are less well constrained. Over the oceans the surface DRE is estimated to be -8.8 ± 0.7 W m-2. Over land, an integration of satellite retrievals and model simulations derives a DRE of -4.9 ± 0.7 W m-2 and -11.8 ± 1.9 W m-2 at the TOA and surface, respectively. CTM simulations produce a wide range of DRE estimates that on average are smaller than the measurement-based DRE by about 30-40%, even after accounting for thin-cirrus and cloud contamination. A number of issues remain. Current estimates of the aerosol direct effect over land are poorly constrained. Uncertainties of DRE estimates are also larger on regional scales than on the global scale, and large discrepancies exist between different approaches. The characterization of aerosol absorption and vertical distribution remains challenging. The aerosol direct effect in the thermal infrared range and in cloudy conditions remains relatively unexplored and quite uncertain, because of a lack of global, systematic measurements of aerosol vertical profiles. A coordinated research strategy needs to be developed for the integration and assimilation of satellite measurements into models to constrain model simulations. Enhanced measurement capabilities in the next few years and high-level scientific cooperation will further advance our knowledge.
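As a back-of-the-envelope illustration only, re-using the numbers quoted above (not the actual retrieval or flux algorithms), the measurement-based quantities combine roughly as follows.

```python
# Back-of-the-envelope combination of the numbers quoted in the abstract;
# an illustration of how the pieces relate, not the actual retrieval/flux method.
tau_ocean = 0.14            # MODIS annual-mean aerosol optical depth over ocean
anthro_fraction = 0.21      # anthropogenic share from MODIS fine-mode fraction
tau_anthro = tau_ocean * anthro_fraction
print(f"anthropogenic AOD over ocean ~ {tau_anthro:.3f}")

dre_toa_clear = -5.5        # W m-2, median measurement-based clear-sky TOA DRE, ocean
cirrus_adjusted = -5.0      # W m-2, after removing thin-cirrus contamination
print(f"thin-cirrus adjustment changes TOA DRE by {cirrus_adjusted - dre_toa_clear:+.1f} W m-2")
```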

Relevance: 90.00%

Abstract:

Purpose – Characteristics of leaders whose behaviour is visceral include taking action based on instinct rather than intellect and exhibiting coarse, base and often negative emotions. Despite the challenge of precisely defining the nature of visceral behaviour, the purpose of this paper is to provide insight into this less attractive side of boardroom life.

Design/methodology/approach – Following a literature review of the research into the negative behaviour leaders exhibit, the paper highlights four forms of visceral behaviour based on focused and intimate qualitative case studies involving the experiences of those on the receiving end of that behaviour within a boardroom context.

Findings – Based on interviews with an international sample of five chief executive officers (CEOs), plus three subordinates with substantial profit and loss responsibility, the study reveals a distinctly human experience from which no one is exempt. The idiosyncratic nature of the visceral behaviour experienced resulted in each study participant's unique experience. The authors conclude that leaders need to adopt specific measures in order to control and reduce the darker human tendencies.

Research limitations/implications – The experiences of study participants are presented in four case studies, providing insight into their experiences whilst also protecting their identity. The study participants were drawn from a sample of companies operating globally within a single sector of the manufacturing industry. The concepts the authors present require validating in other organisations with different demographic profiles.

Originality/value – The paper presents a model based on two dimensions – choice and level of mastery – that provides the reader with insight into the forms of visceral behaviour to which leaders succumb. Insight enables us to offer managers strategic suggestions to guard against visceral behaviour and assist them in mitigating its worst aspects, in both those with whom they work and themselves.

Relevance: 90.00%

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution for reducing the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, using a tensor decomposition that projects the input tensor simultaneously onto more than one direction along each mode. In practice, multi-dimensional data are often collected under the same or very similar conditions, so the data share some common latent components while each regression task can also have its own independent parameters. It is therefore beneficial to analyse the regression parameters across all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the components of the parameters that are common across all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, at a lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
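A minimal sketch of the underlying idea, a regression coefficient tensor constrained to a Tucker form with shared factor matrices and a task-specific core, is given below. The shapes, ranks and variable names are illustrative, and no fitting procedure (nor the sparsity-preserving regulariser) is shown.

```python
# Sketch of a Tucker-structured coefficient tensor for tensor regression.
# Factor matrices U1, U2 would be shared across tasks; each task has its own core G.
# Shapes and ranks are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
I1, I2 = 20, 15          # size of each tensor covariate X (I1 x I2)
R1, R2 = 3, 2            # Tucker ranks along each mode

U1 = rng.standard_normal((I1, R1))   # shared mode-1 factors
U2 = rng.standard_normal((I2, R2))   # shared mode-2 factors
G  = rng.standard_normal((R1, R2))   # task-specific core

# Coefficient tensor W = G x_1 U1 x_2 U2 (Tucker reconstruction).
W = np.einsum('ab,ia,jb->ij', G, U1, U2)

# Linear prediction for one tensor covariate X: y_hat = <X, W>.
X = rng.standard_normal((I1, I2))
y_hat = np.sum(X * W)
print("prediction:", y_hat)
```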

Relevance: 90.00%

Abstract:

How tropical cyclone (TC) activity in the northwestern Pacific might change in a future climate is assessed using multidecadal Atmospheric Model Intercomparison Project (AMIP)-style and time-slice simulations with the ECMWF Integrated Forecast System (IFS) at 16-km and 125-km global resolution. Both models reproduce many aspects of the present-day TC climatology and variability well, although the 16-km IFS is far more skillful in simulating the full intensity distribution and genesis locations, including their changes in response to El Niño–Southern Oscillation. Both IFS models project a small change in TC frequency at the end of the twenty-first century related to distinct shifts in genesis locations. In the 16-km IFS, this shift is southward and is likely driven by the southeastward penetration of the monsoon trough/subtropical high circulation system and the southward shift in activity of the synoptic-scale tropical disturbances in response to the strengthening of deep convective activity over the central equatorial Pacific in a future climate. The 16-km IFS also projects about a 50% increase in the power dissipation index, mainly due to significant increases in the frequency of the more intense storms, which is comparable to the natural variability in the model. Based on composite analysis of large samples of supertyphoons, both the development rate and the peak intensities of these storms increase in a future climate, which is consistent with their tendency to develop more to the south, within an environment that is thermodynamically more favorable for faster development and higher intensities. Coherent changes in the vertical structure of supertyphoon composites show system-scale amplification of the primary and secondary circulations with signs of contraction, a deeper warm core, and an upward shift in the outflow layer and the frequency of the most intense updrafts. Considering the large differences in the projections of TC intensity change between the 16-km and 125-km IFS, this study further emphasizes the need for high-resolution modeling in assessing potential changes in TC activity.
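The power dissipation index (PDI) mentioned above is conventionally defined as the cube of the maximum sustained wind speed integrated over each storm's lifetime and summed over all storms; a minimal sketch under that definition follows, with toy 6-hourly track data as an illustrative assumption.

```python
# Minimal sketch of the power dissipation index (PDI): the integral over each
# storm's lifetime of the cube of its maximum sustained wind, summed over storms.
# The toy track data and 6-hourly sampling are illustrative assumptions.
import numpy as np

def pdi(tracks, dt_seconds=6 * 3600):
    """tracks: list of 1-D arrays of maximum sustained wind (m s-1), 6-hourly."""
    return sum(np.sum(np.asarray(v, dtype=float) ** 3) * dt_seconds for v in tracks)

# Two hypothetical storms, winds in m s-1 every 6 hours.
storm_a = [18, 25, 33, 42, 55, 48, 30]
storm_b = [20, 28, 35, 38, 26]
print(f"PDI ~ {pdi([storm_a, storm_b]):.2e} m3 s-2")
```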

Relevance: 90.00%

Abstract:

At the beginning of the Medieval Climate Anomaly, in the ninth and tenth centuries, the medieval eastern Roman empire, more usually known as Byzantium, was recovering from its early medieval crisis and experiencing climatic conditions favourable to agricultural and demographic growth. Although such favourable climate conditions prevailed in the Balkans and Anatolia during the eleventh century, parts of the imperial territories were facing significant challenges as a result of external political/military pressure. The apogee of medieval Byzantine socio-economic development, around AD 1150, coincides with a period of climatic conditions adverse to its economy, so it is evident that the winter dryness and high climate variability at this time did not prevent Byzantine society and economy from achieving that level of expansion. Soon after this peak, towards the end of the twelfth century, the populations of the Byzantine world were experiencing unusual climatic conditions, with marked dryness and cooler phases. The weakened Byzantine socio-political system must have contributed to the events leading to the fall of Constantinople in AD 1204 and the sack of the city. The final collapse of Byzantine political control over western Anatolia took place half a century later, contemporaneous with the strong cooling that followed a tropical volcanic eruption in AD 1257. We suggest that, regardless of a range of other influential factors, climate change was also an important contributing factor to the socio-economic changes that took place in Byzantium during the Medieval Climate Anomaly. Crucially, therefore, while the relatively sophisticated and complex Byzantine society was certainly influenced by climatic conditions, and while it nevertheless displayed a significant degree of resilience, external pressures as well as tensions within Byzantine society more broadly contributed to an increasing vulnerability to climate impacts. Our interdisciplinary analysis is based on all available sources of information on the climate and society of Byzantium, that is, textual (documentary), archaeological, environmental, climate and climate-model-based evidence about the nature and extent of climate variability in the eastern Mediterranean. The key challenge was, therefore, to assess the relative influence to be ascribed to climate variability and change on the one hand, and, on the other, to anthropogenic factors in the evolution of the Byzantine state and society (such as invasions, changes in international or regional market demand and patterns of production and consumption, etc.). The focus of this interdisciplinary