948 results for Model driven developments
Abstract:
A growing world population, a changing climate and increasingly limited fossil fuels will place new pressures on human production of food, medicine, fuels and feedstock in the twenty-first century. Enhanced crop production promises to ameliorate these pressures. Crops can be bred for increased yields of calories, starch, nutrients, natural medicinal compounds, and other important products. Enhanced resistance to biotic and abiotic stresses can be introduced, toxins removed, and industrial qualities such as fibre strength and biofuel yield per unit mass can be increased. Induced and natural mutations provide a powerful method for the generation of heritable enhanced traits. While mutations have mainly been exploited in forward, phenotype-driven approaches, the rapid accumulation of plant genomic sequence information and of hypotheses regarding gene function now allows their use in reverse genetic approaches to identify lesions in specific target genes. Such gene-driven approaches promise to speed up the process of creating novel phenotypes and can enable the generation of phenotypes unobtainable by traditional forward methods. TILLING (Targeting Induced Local Lesions IN Genomes) is a high-throughput, low-cost reverse genetic method for the discovery of induced mutations. The method has been modified for the identification of natural nucleotide polymorphisms, a process called Ecotilling. The methods are general and have been applied to many species, including a variety of different crops. In this chapter we describe the current status of the TILLING and Ecotilling methods and provide an overview of progress in applying these methods to different plant species, with a focus on work related to food production for developing nations.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between the exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution across cylinders has been shown to be present but is unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
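As an illustration of the kind of transport-delay and sensor-lag processing mentioned above, here is a minimal sketch (not the authors' code; the delay, time constant and signal are hypothetical, and the first-order lag inversion y ≈ y_meas + τ·dy_meas/dt is a generic textbook correction):

```python
import numpy as np

def align_and_compensate(t, y_meas, transport_delay, tau):
    """Shift a measured signal back onto the engine-event time base and
    invert a first-order sensor lag via y ≈ y_meas + tau * dy_meas/dt."""
    # Remove the transport delay by re-sampling the measurement at t + delay
    y_shifted = np.interp(t + transport_delay, t, y_meas)
    # First-order lag inversion (amplifies noise; smooth the signal first in practice)
    dydt = np.gradient(y_shifted, t)
    return y_shifted + tau * dydt

# Hypothetical numbers: opacity measured 0.8 s downstream with a 0.3 s sensor time constant
t = np.linspace(0.0, 10.0, 1001)
opacity = np.exp(-(t - 5.0) ** 2)            # synthetic smoke puff
corrected = align_and_compensate(t, opacity, transport_delay=0.8, tau=0.3)
```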
Abstract:
Recent developments in vehicle steering systems offer new opportunities to measure the steering torque and reliably estimate the vehicle sideslip and the tire-road friction coefficient. This paper presents an approach to vehicle stabilization that leverages these estimates to define state boundaries that exclude unstable vehicle dynamics and utilizes a model predictive envelope controller to bound the vehicle motion within this stable region of the state space. This approach provides a large operating region accessible by the driver and smooth interventions at the stability boundaries. Experimental results obtained with a steer-by-wire vehicle and a proof of envelope invariance demonstrate the efficacy of the envelope controller in controlling the vehicle at the limits of handling.
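A minimal sketch of how a stable handling envelope can be expressed as state bounds and checked before intervention (illustrative only; the fixed sideslip limit, the friction value and the yaw-rate bound r_max ≈ μg/Vx are generic assumptions, not the paper's controller):

```python
import numpy as np

def handling_envelope(vx, mu, g=9.81, beta_max=np.deg2rad(6.0)):
    """Hypothetical envelope: sideslip bounded by a fixed limit and yaw rate
    bounded by the steady-state friction limit r_max ≈ mu*g/vx."""
    r_max = mu * g / vx
    return beta_max, r_max

def inside_envelope(beta, r, vx, mu):
    """True if the current sideslip/yaw-rate state lies inside the envelope,
    i.e. no controller intervention would be required."""
    beta_max, r_max = handling_envelope(vx, mu)
    return abs(beta) <= beta_max and abs(r) <= r_max

# Example: 2 deg sideslip and 0.4 rad/s yaw rate at 20 m/s on dry asphalt
print(inside_envelope(np.deg2rad(2.0), 0.4, vx=20.0, mu=0.9))
```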
Abstract:
We consider the inertially driven, time-dependent biaxial extensional motion of inviscid and viscous thinning liquid sheets. We present an analytic solution describing the base flow and examine its linear stability to varicose (symmetric) perturbations within the framework of a long-wave model where transient growth and long-time asymptotic stability are considered. The stability of the system is characterized in terms of the perturbation wavenumber, Weber number, and Reynolds number. We find that the isotropic nature of the base flow yields stability results that are identical for axisymmetric and general two-dimensional perturbations. Transient growth of short-wave perturbations at early to moderate times can have significant and lasting influence on the long-time sheet thickness. For finite Reynolds numbers, a radially expanding sheet is weakly unstable with bounded growth of all perturbations, whereas in the inviscid and Stokes flow limits sheets are unstable to perturbations in the short-wave limit.
Abstract:
Organisms provide some of the most sensitive indicators of climate change, and evolutionary responses are becoming apparent in species with short generation times. Large datasets on genetic polymorphism that can provide an historical benchmark against which to test for recent evolutionary responses are very rare, but an exception is found in the brown-lipped banded snail (Cepaea nemoralis). This species is sensitive to its thermal environment and exhibits several polymorphisms of shell colour and banding pattern affecting shell albedo in the majority of populations within its native range in Europe. We tested for evolutionary changes in shell albedo that might have been driven by the warming of the climate in Europe over the last half century by compiling an historical dataset for 6,515 native populations of C. nemoralis and comparing this with new data on nearly 3,000 populations. The new data were sampled mainly in 2009 through the Evolution MegaLab, a citizen science project that engaged thousands of volunteers in 15 countries throughout Europe in the biggest such exercise ever undertaken. A known geographic cline in the frequency of the colour phenotype with the highest albedo (yellow) was shown to have persisted, and a difference in colour frequency between woodland and more open habitats was confirmed, but there was no general increase in the frequency of yellow shells. This may have been because snails adapted to a warming climate through behavioural thermoregulation. By contrast, we detected an unexpected decrease in the frequency of Unbanded shells and an increase in the Mid-banded morph. Neither of these evolutionary changes appears to be a direct response to climate change, indicating the influence of other selective agents, possibly related to changing predation pressure and habitat change with effects on micro-climate.
Abstract:
A prototype vortex-driven air lift pump was developed and experimentally evaluated. It was designed to be easily manufactured and scalable for arbitrary riser diameters. The model tested fit in a 2 inch diameter riser with six air injection nozzles through which air was injected helically around the perimeter of the riser at an angle of 70° from pure tangential injection. The pump was intended to transport both water and sediment over a large range of submergence ratios. A test apparatus was designed to be able to simulate deep water or oceanic environments. The resulting test setup had a finite reservoir; over the course of a test, the submergence ratio varied from 0.48 to 0.39. For air injection pressures ranging from 10 to 60 psig and for air flow rates of 6 to 15 scfm, the induced water discharge flow rates varied only slightly, due to the limited range of available submergence ratios. The anticipated simulation of a deep water environment, with a corresponding equivalent increase in the submergence ratio, proved unattainable. The pump prototype successfully transported both water and sediment (sand). The percent volume yield of the sediment was in an acceptable range. The pump design has subsequently been used successfully in a 4 inch configuration in a follow-on project. A computer program was written in Matlab to simulate the pump characteristics. The program output water pressures at the location of air injection which were physically compatible with the experimental data.
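For orientation, the submergence ratio and the minimum air injection pressure follow from simple hydrostatics; a brief sketch with hypothetical depths (not the project's Matlab program):

```python
RHO_WATER = 998.0   # kg/m^3
G = 9.81            # m/s^2
PSI_PER_PA = 1.0 / 6894.76

def submergence_ratio(injection_depth_m, lift_height_m):
    """Submergence ratio = submerged riser length / total riser length."""
    return injection_depth_m / (injection_depth_m + lift_height_m)

def min_injection_pressure_psig(injection_depth_m):
    """Hydrostatic head (gauge) the injected air must overcome at the nozzles."""
    return RHO_WATER * G * injection_depth_m * PSI_PER_PA

# Hypothetical geometry: nozzles 1.5 m below the free surface, 2.0 m of lift
print(submergence_ratio(1.5, 2.0))        # ≈ 0.43, within the 0.39-0.48 band tested
print(min_injection_pressure_psig(1.5))   # ≈ 2.1 psig from hydrostatics alone
```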
Abstract:
Clay mineral-rich sedimentary formations are currently under investigation to evaluate their potential use as host formations for deep underground disposal facilities for radioactive waste (e.g. Boom Clay (BE), Opalinus Clay (CH), Callovo-Oxfordian argillite (FR)). The ultimate safety of the corresponding repository concepts depends largely on the capacity of the host formation to limit the flux towards the biosphere of radionuclides (RN) contained in the waste to acceptably low levels. Data for diffusion-driven transfer in these formations show extreme differences in the measured or modelled behaviour of various radionuclides, e.g. between halogen RN (Cl-36, I-129) and actinides (U-238, U-235, Np-237, Th-232, etc.), which result from major differences between RN in the effects of two phenomena on transport: diffusion and sorption. This paper describes recent research aimed at improving understanding of these two phenomena, focusing on the results of studies carried out during the EC Funmig IP on clayrocks from the above three formations and from the Boda formation (HU). Project results regarding the phenomena governing water, cation and anion distribution and mobility in the pore volumes influenced by the negatively charged surfaces of clay minerals show a convergence between modelling results for behaviour at the molecular scale and descriptions based on electrical double layer models. Transport models exist which couple ion distribution relative to the clay-solution interface with differentiated diffusive characteristics. These codes are able to reproduce the main trends observed experimentally, e.g. D_e(anion) < D_e(HTO) < D_e(cation) and the variation of D_e(anion) as a function of ionic strength and material density. These trends are also well explained by models of transport through ideal porous matrices made up of a charged surface material. Experimental validation of these models is good for monovalent alkaline cations, in progress for divalent electrostatically interacting cations (e.g. Sr²⁺) and still relatively poor for 'strongly sorbing', high-K_d cations. Funmig results have clarified how clayrock mineral composition, and the corresponding organisation of mineral grain assemblages and their associated porosity, can affect mobile solute (anions, HTO) diffusion at different scales (mm to geological formation). In particular, advances in the capacity to map clayrock mineral grain-porosity organisation at high resolution provide additional elements for understanding diffusion anisotropy and for relating diffusion characteristics measured at different scales. On the other hand, studies evaluating the potential effects of heterogeneity on mobile species diffusion at the formation scale tend to show a minimal effect when compared to a homogeneous property model. Finally, the results of a natural tracer-based study carried out on the Opalinus Clay formation increase confidence in the use of diffusion parameters measured on laboratory-scale samples for predicting diffusion over geological time and space scales. Much effort was devoted to improving understanding of coupled sorption-diffusion phenomena for sorbing cations in clayrocks. Results regarding sorption equilibrium in dispersed and compacted materials for weakly to moderately sorbing cations (Sr²⁺, Cs⁺, Co²⁺) tend to show that the same sorption model probably holds in both systems.
It was not possible to demonstrate this for highly sorbing elements such as Eu(III) because of the extremely long times needed to reach equilibrium conditions, but there does not seem to be any clear reason why such elements should not behave similarly. Diffusion experiments carried out with Sr²⁺, Cs⁺ and Eu(III) on all of the clayrocks gave mixed results and tend to show that coupled diffusion-sorption migration is much more complex than expected, generally leading to greater mobility than that predicted by coupling a batch-determined K_d with Fick's law based on the diffusion behaviour of HTO. If the K_d measured on equivalent dispersed systems holds, as was shown to be the case for Sr and Cs (and probably Co) on Opalinus Clay, these results indicate that these cations have a D_e value higher than that of HTO (up to a factor of 10 for Cs⁺). Results are as yet very limited for moderately to strongly sorbing species (e.g. Co(II), Eu(III), Cu(II)) because of their very slow transfer characteristics. (C) 2011 Elsevier Ltd. All rights reserved.
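For reference, the conventional coupling of a batch-determined K_d with Fick's law that these results are compared against can be written as follows (a standard formulation in diffusion-sorption modelling, not reproduced from the paper):

```latex
% Single-species diffusion with linear, reversible sorption (standard form)
\alpha \,\frac{\partial C}{\partial t} \;=\; D_e \,\nabla^{2} C ,
\qquad
\alpha \;=\; \varepsilon + \rho_b K_d ,
\qquad
D_a \;=\; \frac{D_e}{\alpha}
% C       : pore-water concentration
% D_e, D_a: effective and apparent diffusion coefficients
% \varepsilon : diffusion-accessible porosity,  \rho_b : dry bulk density
% K_d     : sorption distribution coefficient from batch experiments
% Combining a batch K_d with an HTO-based D_e in this way underpredicts the
% cation mobilities reported above.
```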
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" (CM) of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters γ in probability models f(y; γ) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters γ = (θ, η) into a subset of interest θ and other "nuisance parameters" η necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with the computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
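As a concrete instance of this partition into θ and η (a standard formulation rather than a quotation from the paper), the proportional hazards model separates the regression parameters of interest from the unspecified baseline hazard, which the partial likelihood then eliminates:

```latex
% Cox proportional hazards model as an instance of gamma = (theta, eta)
\lambda(t \mid x) \;=\; \lambda_0(t)\,\exp(x^{\top}\beta)
% theta = beta              (log hazard ratios, the quantities of interest)
% eta   = \lambda_0(\cdot)  (the unspecified baseline hazard, a nuisance parameter)
% The partial likelihood
L(\beta) \;=\; \prod_{i:\,\delta_i = 1}
  \frac{\exp(x_i^{\top}\beta)}{\sum_{j \in R(t_i)} \exp(x_j^{\top}\beta)}
% depends on beta alone, so inference about theta is free of eta.
```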
Abstract:
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
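A minimal sketch of the kind of spike-driven, bistable update rule described above (the structure and all parameter values are illustrative assumptions, not the original hardware model):

```python
import numpy as np

rng = np.random.default_rng(0)

class BistableSynapse:
    """Illustrative spike-driven synapse: bistable internal state, modified on
    presynaptic spikes, gated by postsynaptic depolarization and a calcium-like
    trace that integrates postsynaptic action potentials."""

    def __init__(self, theta_v=0.8, ca_low=0.2, ca_high=0.9,
                 a=0.1, b=0.1, drift=0.02, threshold=0.5):
        self.x = rng.random()        # internal synaptic variable in [0, 1]
        self.theta_v = theta_v       # depolarization threshold
        self.ca_low, self.ca_high = ca_low, ca_high   # window on the Ca-like trace
        self.a, self.b = a, b        # up / down jumps on a presynaptic spike
        self.drift = drift           # relaxation toward the nearest stable state
        self.threshold = threshold   # bistability threshold

    def pre_spike(self, v_post, ca_post):
        # The sign of the change depends on the postsynaptic depolarization and
        # on the variable integrating postsynaptic spikes.
        if self.ca_low < ca_post < self.ca_high:
            if v_post > self.theta_v:
                self.x = min(1.0, self.x + self.a)   # potentiating jump
            else:
                self.x = max(0.0, self.x - self.b)   # depressing jump

    def relax(self, dt=1.0):
        # Without stimulation the variable drifts to 0 or 1 (bistability), so the
        # stored state is preserved indefinitely.
        self.x += self.drift * dt * (1.0 if self.x > self.threshold else -1.0)
        self.x = min(1.0, max(0.0, self.x))

    @property
    def efficacy(self):
        return 1.0 if self.x > self.threshold else 0.0

syn = BistableSynapse()
syn.pre_spike(v_post=0.9, ca_post=0.5)   # potentiating event during training
syn.relax(dt=10.0)                       # consolidation toward the high state
print(syn.efficacy)
```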
Abstract:
In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when being introduced to testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.
Abstract:
BACKGROUND: Reperfusion injury is insufficiently addressed in current clinical management of acute limb ischemia. Controlled reperfusion carries an enormous clinical potential and was tested in a new reality-driven rodent model. METHODS AND RESULTS: Acute hind-limb ischemia was induced in Wistar rats and maintained for 4 hours. Unlike previous tourniquet models, the femoral vessels were surgically prepared to facilitate controlled reperfusion and to prevent venous stasis. Rats were randomized into an experimental group (n=7), in which limbs were selectively perfused with a cooled isotonic heparin solution at a limited flow rate before blood flow was restored, and a conventional group (n=7; uncontrolled blood reperfusion). Rats were killed 4 hours after blood reperfusion. Nonischemic limbs served as controls. Ischemia/reperfusion injury was significant in both groups; the total wet-to-dry ratio was 159±44% of normal (P=0.016), whereas muscle viability and contraction force were reduced to 65±13% (P=0.016) and 45±34% (P=0.045), respectively. Controlled reperfusion, however, attenuated reperfusion injury significantly. Tissue edema was less pronounced (132±16% versus 185±42%; P=0.011), and muscle viability (74±11% versus 57±9%; P=0.004) and contraction force (68±40% versus 26±7%; P=0.045) were better preserved than after uncontrolled reperfusion. Moreover, subsequent blood circulation as assessed by laser Doppler recovered completely after controlled reperfusion but remained durably impaired after uncontrolled reperfusion (P=0.027). CONCLUSIONS: Reperfusion injury was significantly alleviated by basic modifications of the initial reperfusion period in a new in vivo model of acute limb ischemia. With this model, systematic optimization of the corresponding protocols may eventually translate into improved clinical management of acute limb ischemia.
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given aggressive governmental policies for the reduction of fossil fuel dependency. So-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this acceptance, the size of wind turbines has grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods, appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting the actual dynamic states (eigen-realizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric based on Subspace Realization Theory is proposed, adapted to stochastic, non-stationary and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components of which the wind turbine is made. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, also known as an Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts the interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
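The Modal Assurance Criterion invoked here has a standard definition; a brief sketch with hypothetical mode shapes (not data from this thesis):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b)), in [0, 1]."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Hypothetical example: identified mode vs. a scaled, slightly perturbed model mode
phi_test = np.array([0.12, 0.48, 0.95, 1.30])
phi_model = 2.0 * phi_test + 0.05 * np.random.default_rng(1).standard_normal(4)
print(mac(phi_test, phi_model))   # close to 1.0 for well-correlated shapes
```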
Abstract:
BACKGROUND: Wheezing disorders in childhood vary widely in clinical presentation and disease course. In recent years, several ways of classifying wheezing children into different disease phenotypes have been proposed and are increasingly used for clinical guidance, but validation of these hypothetical entities is difficult. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this study was to develop a testable disease model which reflects the full spectrum of wheezing illness in preschool children. We performed a qualitative study among a panel of 7 experienced clinicians from 4 European countries working in primary, secondary and tertiary paediatric care. In a series of questionnaire surveys and structured discussions, we found a general consensus that preschool wheezing disorders consist of several phenotypes, with great heterogeneity of specific disease concepts between clinicians. Initially, 24 disease entities were described by the 7 physicians. In structured discussions, these could be narrowed down to three entities linked to proposed mechanisms: a) allergic wheeze, b) non-allergic wheeze due to structural airway narrowing and c) non-allergic wheeze due to an increased immune response to viral infections. This disease model will serve to create an artificial dataset that allows the validation of data-driven multidimensional methods, such as cluster analysis, which have been proposed for the identification of wheezing phenotypes in children. CONCLUSIONS/SIGNIFICANCE: While there appears to be wide agreement among clinicians that wheezing disorders consist of several diseases, there is less agreement regarding their number and nature. A great diversity of disease concepts exists, but a unified phenotype classification reflecting underlying disease mechanisms is lacking. We propose a disease model which may help guide future research so that proposed mechanisms are measured at the right time and their role in disease heterogeneity can be studied.
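A minimal sketch of the kind of validation exercise described, using a synthetic dataset and an off-the-shelf clustering method as stand-ins (scikit-learn KMeans; the features, group structure and labels are hypothetical, not the study's data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Simulate an artificial dataset with three latent wheeze phenotypes
# (features are arbitrary stand-ins, e.g. atopy score, symptom frequency, viral triggers)
centers = np.array([[2.0, 0.5, 0.2], [0.3, 1.8, 0.4], [0.4, 0.6, 2.1]])
labels_true = rng.integers(0, 3, size=300)
X = centers[labels_true] + rng.normal(scale=0.6, size=(300, 3))

# Ask a data-driven method to recover the phenotypes, then score the recovery
labels_found = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(labels_true, labels_found))  # 1.0 = perfect recovery
```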
Abstract:
We describe four recent additions to NEURON's suite of graphical tools that make it easier for users to create and manage models: an enhancement to the Channel Builder that facilitates the specification and efficient simulation of stochastic channel models
Abstract:
Following the end of a half-century of Soviet occupation, Lithuania, like other former Soviet republics, has been in socio-economic disorder. Now that Lithuania is free, the system of social welfare is characterized by under-funded health services and pensions, and a large number of institutions. Semi-structured interviews were conducted with students and practitioners focusing on community development, using Lofland's model of social setting analysis. Results indicate that the collaborative efforts successfully produced a revolutionary and successful social service program, a multi-generational living facility offering full-time social services to unwed mothers, infants, and elderly residents. This article is based upon the qualitative study of social work practitioners and social work students and chronicles the successes and difficulties encountered within the process of community development.