914 results for Model transformation analysis
Abstract:
A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days), from 7,331 complete first lactations of Holstein cattle, were analyzed. A standard multivariate analysis (MV), reduced-rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses fitting a factor analytic structure with 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and fixed effects of the contemporary groups (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. In addition, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TDs obtained with the PC2 model were higher than those obtained with the MV model, especially for adjacent test-days at the end of lactation, where they were close to unity. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of the genetic variation among the 10 traits.
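The reduced-rank idea in this abstract can be illustrated with a small sketch: eigendecompose a genetic covariance matrix and keep the leading principal components. The 10x10 matrix below is a synthetic stand-in (the paper's REML estimates are not reproduced), so the numbers are purely illustrative:

```python
import numpy as np

# Hypothetical 10x10 genetic covariance matrix for the 10 test-day yields,
# built so that two directions dominate the genetic variation.
rng = np.random.default_rng(0)
loadings = rng.normal(size=(10, 2))           # two dominant directions
G = loadings @ loadings.T + 0.05 * np.eye(10)

# Eigendecomposition: principal components of the genetic covariance matrix.
eigvals, eigvecs = np.linalg.eigh(G)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # descending order

# Proportion of genetic variance captured by the first k components.
explained = np.cumsum(eigvals) / eigvals.sum()
print(f"variance explained by PC2 model: {explained[1]:.3f}")

# Rank-2 approximation, analogous to what a PC2 model fits.
G2 = eigvecs[:, :2] @ np.diag(eigvals[:2]) @ eigvecs[:, :2].T
print(f"max abs error of rank-2 fit: {np.abs(G - G2).max():.3f}")
```

For a matrix constructed this way, the first two components capture almost all of the variance, which is the situation the abstract describes for the real data.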
Abstract:
This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
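Likelihood-based inference for the location-scale power-normal model can be sketched by direct numerical maximization of the log-likelihood. The density f(x) = (alpha/sigma) phi(z) Phi(z)^(alpha-1), with z = (x - mu)/sigma, is the standard form for this family; the data below are a synthetic stand-in for the pollen-density measurements:

```python
import numpy as np
from scipy import stats, optimize

def pn_negloglik(theta, x):
    """Negative log-likelihood of the location-scale power-normal model:
    f(x) = (alpha/sigma) * phi(z) * Phi(z)**(alpha-1), z = (x-mu)/sigma."""
    mu, log_sigma, log_alpha = theta              # log-parametrise positives
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)
    z = (x - mu) / sigma
    logpdf = (np.log(alpha) - np.log(sigma)
              + stats.norm.logpdf(z)
              + (alpha - 1) * stats.norm.logcdf(z))
    return -logpdf.sum()

# Synthetic data (the family contains the normal as the alpha = 1 case).
rng = np.random.default_rng(1)
x = stats.norm.rvs(loc=2.0, scale=1.5, size=500, random_state=rng)

fit = optimize.minimize(pn_negloglik, x0=[0.0, 0.0, 0.0], args=(x,),
                        method="Nelder-Mead")
mu_hat, sigma_hat, alpha_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(mu_hat, sigma_hat, alpha_hat)
```

A known caveat of this family, which the paper's likelihood results address more carefully, is a flat likelihood ridge between the location and shape parameters, so a generic optimizer should be treated as a sketch rather than a production estimator.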
Abstract:
Herein, the electro-oxidation of ethanol on Pt and PtRu electrodeposits in acidic media was investigated for the first time by in situ surface-enhanced infrared absorption spectroscopy with attenuated total reflection (ATR-SEIRAS). The experimental setup circumvents the weak absorbance signals related to adsorbed species, usually observed for rough, electrodeposited surfaces, and allows a full description of the CO coverage as a function of potential for both catalysts. The dynamics of adsorption-oxidation of CO was assessed by ATR-SEIRAS experiments (involving four ethanol concentrations) and correlated with expressions derived from a simple kinetic model. The kinetic analysis suggests that the growth of the adsorbed CO layer is influenced neither by the presence of Ru nor by the concentration of ethanol. The results suggest that the C-C scission is not related to the presence of Ru and probably happens at Pt sites.
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs relevant to ESA and all the main European space stakeholders, and from them consolidated a set of technical high-level requirements. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms with the semantics, the assumptions and the constraints of the computational model; and (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) were already proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved through support for design views and by careful allocation of concerns to dedicated software entities; (ii) support for specification and model-based analysis of extra-functional properties; and (iii) the inclusion of space-specific concerns.
Abstract:
Summary PhD Thesis Jan Pollmann: This thesis focuses on global-scale measurements of light reactive non-methane hydrocarbons (NMHC), in the volatility range from ethane to toluene, with a special focus on ethane, propane, isobutane, butane, isopentane and pentane. Even though they occur only at the ppt level (pmol mol-1) in the remote troposphere, these species can yield insight into key atmospheric processes. An analytical method was developed and subsequently evaluated to analyze NMHC from the NOAA ESRL cooperative air sampling network. Potential analytical interferences from other atmospheric trace gases (water vapor and ozone) were carefully examined, and the analytical accuracy and precision were analyzed in detail. More than 90% of the data points were shown to meet the Global Atmosphere Watch (GAW) data quality objective. Trace gas measurements from 28 measurement stations were used to derive the global atmospheric distribution profile for 4 NMHC (ethane, propane, isobutane, butane). A close comparison of the derived ethane data with previously published reports showed that the northern hemispheric ethane background mixing ratio has declined by approximately 30% since 1990. No such change was observed for southern hemispheric ethane. The NMHC data and trace gas data supplied by NOAA ESRL were used to estimate local diurnally averaged hydroxyl radical (OH) mixing ratios by variability analysis. Comparisons of the variability-derived OH with directly measured and modeled OH mixing ratios showed good agreement outside the tropics; tropical OH was on average two times higher than predicted by the model. Variability analysis was also used to assess the effect of chlorine radicals on atmospheric oxidation chemistry; Cl was found to be probably not of significant relevance on a global scale.
Abstract:
We study QCD with twelve light flavors at intermediate values of the bare lattice coupling. We contrast the results for the order parameter with different theoretical models motivated by the physics of the Goldstone phase and of the symmetric phase, and we perform a model independent analysis of the meson spectrum inspired by universal properties of chiral symmetry. Our analysis favors chiral symmetry restoration.
Abstract:
Functional magnetic resonance imaging (fMRI) is a non-invasive technique commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have commonly been used to determine whether activation related to the protocol differs across groups. Because the number of subjects within each study is generally limited, accurate estimation of the variance at each voxel is difficult; combining information across voxels in the statistical analysis of fMRI data is therefore desirable to improve efficiency. Here we construct a hierarchical model and apply an empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error, and the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. The methodology was also applied to experimental data from a cognitive activation task.
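The variance-shrinkage idea can be sketched in a few lines: pool voxel-wise sample variances toward a common value and form a moderated t-statistic, in the style of the moderated statistics used in genomics. The prior weight d0 below is a hypothetical fixed value (in practice the prior parameters are estimated from the distribution of the sample variances), and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: contrast values at 1000 voxels for two groups of 8.
n1 = n2 = 8
V = 1000
y1 = rng.normal(0.0, 1.0, size=(V, n1))
y2 = rng.normal(0.0, 1.0, size=(V, n2))

diff = y1.mean(axis=1) - y2.mean(axis=1)
df = n1 + n2 - 2
s2 = ((y1.var(axis=1, ddof=1) * (n1 - 1)
       + y2.var(axis=1, ddof=1) * (n2 - 1)) / df)    # pooled variance

# Empirical Bayes shrinkage: pull each voxel variance toward the
# across-voxel average with a hypothetical prior weight d0.
d0, s0_sq = 4.0, s2.mean()
s2_shrunk = (d0 * s0_sq + df * s2) / (d0 + df)

t_classic = diff / np.sqrt(s2 * (1 / n1 + 1 / n2))
t_moderated = diff / np.sqrt(s2_shrunk * (1 / n1 + 1 / n2))
print(f"spread of variances: raw {s2.std():.3f}, shrunk {s2_shrunk.std():.3f}")
```

The shrunken variances have the same mean but a smaller spread than the raw ones, which is what stabilizes the denominator of the moderated statistic when per-voxel degrees of freedom are low.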
Abstract:
Civil infrastructure provides essential services for the development of both society and economy, and it is very important to manage such systems efficiently to ensure sound performance. However, there are challenges in extracting information from the available data, which necessitates methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller meaningful subsets show good random behavior. The failure rate over time is additionally analyzed by applying existing reliability models and non-parametric approaches, and a scheme is further proposed to depict rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition are the transition probability estimates and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a sensitivity analysis is performed for the application of a non-homogeneous Markov chain model.
Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, while for the interval estimate the outputs show variations similar to those of the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis is performed for the material extraction, construction and rehabilitation phases; in the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among the alternatives to support decision making.
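The condition-prediction step can be sketched with a discrete-state Markov chain whose transition probabilities are then perturbed in a Monte Carlo loop. The 5-state matrix and the +/-0.05 interval below are hypothetical values, not the thesis's estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 5-state condition model (1 = best ... 5 = worst); a facility
# either stays in its state or deteriorates one state per year.
P = np.array([[0.90, 0.10, 0.00, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.20, 0.00],
              [0.00, 0.00, 0.00, 0.75, 0.25],
              [0.00, 0.00, 0.00, 0.00, 1.00]])

# Propagate the condition distribution 20 years from "like new".
state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(20):
    state = state @ P
print("20-year condition distribution:", state.round(3))

# Monte Carlo over uncertain deterioration rates: perturb the "stay"
# probabilities within an interval and record the worst-state fraction.
failed = []
for _ in range(500):
    stay = np.clip(np.diag(P)[:4] + rng.uniform(-0.05, 0.05, 4), 0, 1)
    Pk = np.diag(np.append(stay, 1.0))
    Pk[np.arange(4), np.arange(4) + 1] = 1 - stay   # deterioration step
    s = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    for _ in range(20):
        s = s @ Pk
    failed.append(s[-1])
print(f"P(state 5 at year 20): mean {np.mean(failed):.3f}")
```

The spread of the Monte Carlo outputs relative to the input interval is exactly the kind of sensitivity comparison the abstract describes.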
Abstract:
BACKGROUND Among children with wheeze and recurrent cough there is great variation in clinical presentation and time course of the disease. We previously distinguished 5 phenotypes of wheeze and cough in early childhood by applying latent class analysis to longitudinal data from a population-based cohort (original cohort). OBJECTIVE To validate previously identified phenotypes of childhood cough and wheeze in an independent cohort. METHODS We included 903 children reporting wheeze or recurrent cough from an independent population-based cohort (validation cohort). As in the original cohort, we used latent class analysis to identify phenotypes on the basis of symptoms of wheeze and cough at 2 time points (preschool and school age) and objective measurements of atopy, lung function, and airway responsiveness (school age). Prognostic outcomes (wheeze, bronchodilator use, cough apart from colds) 5 years later were compared across phenotypes. RESULTS When using a 5-phenotype model, the analysis distinguished 3 phenotypes of wheeze and 2 of cough as in the original cohort. Two phenotypes were closely similar in both cohorts: Atopic persistent wheeze (persistent multiple trigger wheeze and chronic cough, atopy and reduced lung function, poor prognosis) and transient viral wheeze (early-onset transient wheeze with viral triggers, favorable prognosis). The other phenotypes differed more between cohorts. These differences might be explained by differences in age at measurements. CONCLUSIONS Applying the same method to 2 different cohorts, we consistently identified 2 phenotypes of wheeze (atopic persistent wheeze, transient viral wheeze), suggesting that these represent distinct disease processes. Differences found in other phenotypes suggest that the age when features are assessed is critical and should be considered carefully when defining phenotypes.
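Latent class analysis of binary symptom indicators can be sketched with a small EM implementation. The two classes, six indicators, and item-response probabilities below are synthetic assumptions, not the cohort data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the cohort: 6 binary symptom indicators
# (e.g. wheeze/cough at two ages) generated from 2 hypothetical classes.
true_p = np.array([[0.9, 0.8, 0.9, 0.7, 0.8, 0.9],   # "persistent"-like class
                   [0.2, 0.1, 0.2, 0.1, 0.2, 0.1]])  # "transient"-like class
z = rng.integers(0, 2, size=600)                      # true class labels
X = (rng.random((600, 6)) < true_p[z]).astype(float)

# EM for a 2-class latent class model with independent binary items.
K, (N, J) = 2, X.shape
pi = np.full(K, 1 / K)                     # class prevalences
p = rng.uniform(0.3, 0.7, size=(K, J))     # item-response probabilities
for _ in range(200):
    # E-step: posterior class membership for each child.
    log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
    r = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update prevalences and item probabilities.
    pi = r.mean(axis=0)
    p = np.clip(r.T @ X / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

assigned = r.argmax(axis=1)
agreement = max((assigned == z).mean(), (assigned != z).mean())
print(f"class recovery (up to label switching): {agreement:.2f}")
```

With well-separated classes the fitted model recovers the generating classes almost perfectly; the validation exercise in the abstract is essentially asking whether two cohorts yield the same recovered classes.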
Abstract:
A geometrical force balance that links stresses to ice-bed coupling along a flow band of an ice sheet was developed in 1988 for longitudinal tension in ice streams and published 4 years later. It remains a work in progress. Now gravitational forces balanced by forces producing tensile, compressive, basal shear, and side shear stresses are all linked to ice-bed coupling by the floating fraction phi of ice that produces the concave surface of ice streams. These lead inexorably to a simple formula showing how phi varies along flow bands where surface and bed topography are known: phi = h_O/h_I, where h_O is the ice thickness h_I at x = 0, with x horizontal and positive upslope from the grounded ice margin. This captures the basic fact in glaciology: the height of ice depends on how strongly ice couples to the bed. It shows how far a high convex ice sheet (phi = 0) has gone in collapsing into a low flat ice shelf (phi = 1). Here phi captures ice-bed coupling under an ice stream and h_O captures ice-bed coupling beyond ice streams.
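The formula is simple enough to evaluate directly: phi(x) = h_O / h_I(x), so phi = 1 at the grounded margin and decreases upslope as the ice thickens. The thickness profile below is hypothetical:

```python
import numpy as np

# Hypothetical flow-band profile: ice thickness h_I(x) at stations x,
# with x horizontal and positive upslope from the grounded ice margin.
x_km = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
h_I = np.array([800.0, 1500.0, 2100.0, 2600.0, 3000.0, 3200.0])  # m
h_O = h_I[0]                 # ice thickness at x = 0

phi = h_O / h_I              # floating fraction along the flow band
for xi, p in zip(x_km, phi):
    print(f"x = {xi:5.0f} km  phi = {p:.3f}")
```

Values near 1 mark ice-shelf-like, weakly coupled ice; values near 0 mark the convex, strongly coupled interior.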
Abstract:
Empirical studies show that young women in Switzerland have more difficulty finding apprenticeships than men. This study tests the hypotheses that this is also rooted in a smaller supply of apprenticeship occupations and positions, and in tendentially higher scholastic requirements, in their typical areas of interest compared with the typical areas of interest of men. To this end, a typological analysis of the Swiss apprenticeship market of 2006 is carried out on the basis of Holland's (1997) RIASEC typology and related directly to the well-established research on vocational interests. Both hypotheses were confirmed. Implications for theory and practice are described.
Abstract:
Context: Mammary and placental 17β-hydroxysteroid dehydrogenase type 1 (17βHSD1). Objective: To assess the impact of testosterone, tibolone, and black cohosh on purified mammary and placental 17βHSD1. Materials and methods: 17βHSD1 was purified from human mammary gland and placenta by column chromatography, its activity was monitored by a radioactive activity assay, and the degree of purification was determined by gel electrophoresis. Photometric cofactor transformation analysis was performed to assess 17βHSD1 activity without or in the presence of testosterone, tibolone and black cohosh. Results: 17βHSD1 from both sources displayed comparable basal activity. Testosterone and tibolone metabolites inhibited purified mammary and placental 17βHSD1 activity to different extents, whereas black cohosh had no impact. Discussion: Studies on purified enzymes reveal the individual action of drugs on local regulatory mechanisms, thus helping to develop more targeted therapeutic interventions. Conclusion: Testosterone, tibolone and black cohosh display a beneficial effect on local mammary estrogen metabolism by not affecting, or by decreasing, local estradiol exposure.
Abstract:
The physical processes controlling the seasonal budget of mixed layer salinity (MLS) in the tropical Atlantic Ocean are investigated using a regional configuration of an ocean general circulation model. The analysis reveals that the MLS cycle is generally weak in comparison with the individual physical processes entering the budget, because of strong compensation among them. In evaporative regions, around the sea surface salinity (SSS) maxima, the ocean acts to freshen the mixed layer against the action of evaporation. Poleward of the southern SSS maximum, the freshening is ensured by geostrophic advection, vertical salinity diffusion and, during winter, a dominant contribution from convective entrainment. On the equatorward flanks of the SSS maxima, Ekman transport mainly contributes to supplying freshwater from the ITCZ regions, while vertical salinity diffusion adds to the effect of evaporation. All these terms are phase-locked through the effect of the wind. Under the seasonal march of the ITCZ, and in coastal areas affected by river runoff (7°S-15°N), the upper-ocean freshening by precipitation and/or runoff is attenuated by vertical salinity diffusion. In the eastern equatorial regions, the seasonal cycle of wind-forced surface currents advects freshwater, which is mixed with saline subsurface water by the strong vertical turbulent diffusion. In all these regions, vertical diffusion makes an important contribution to the MLS budget by providing, in general, an upward flux of salinity, due mostly to the vertical salinity gradient and wind-driven mixing. Furthermore, at the equator, where vertical shear associated with the surface horizontal currents develops, the diffusion also depends on the stability of the sheared flow.
Abstract:
The ability to determine what activity of daily living a person performs is of interest in many application domains: it is possible to assess the physical and cognitive capabilities of the elderly by inferring what activities they perform in their houses. Our primary aim was to establish a proof of concept that a wireless sensor system can monitor and record physical activity and that these data can be modeled to predict activities of daily living. The secondary aim was to determine the optimal placement of the sensor boxes for detecting activities in a room. A wireless sensor system was set up in a laboratory kitchen, and ten healthy participants were asked to make tea following a defined sequence of tasks. Data were collected from the eight wireless sensor boxes placed at specific locations in the test kitchen and analyzed to detect the sequences of tasks performed by the participants. These task sequences were used to train and test a Markov model. Data analysis focused on the reliability of the system and the integrity of the collected data. The sequences of tasks were successfully recognized for all subjects, and the averaged patterns of task sequences were highly correlated between subjects. Analysis of the collected data indicates that sensors placed in different locations are capable of recognizing activities, with the movement detection sensor contributing the most to the detection of tasks. The central top of the room, with an unobstructed view, was considered the best location from which to record data for activity detection. Wireless sensor systems show much promise as easily deployable tools for monitoring and recognizing activities of daily living.
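The Markov-model step can be sketched by estimating a first-order transition matrix from observed task sequences and scoring new sequences against it. The task labels and training sequences below are hypothetical stand-ins for the sensor-derived data:

```python
import numpy as np

# Hypothetical task labels in a tea-making protocol.
tasks = ["kettle", "cupboard", "fridge", "pour", "stir"]
idx = {t: i for i, t in enumerate(tasks)}

# Hypothetical training sequences, standing in for the sensor-derived data.
train = [["kettle", "cupboard", "fridge", "pour", "stir"],
         ["kettle", "fridge", "cupboard", "pour", "stir"],
         ["kettle", "cupboard", "fridge", "pour", "stir"]]

# Estimate first-order Markov transition probabilities with add-one
# smoothing so unseen transitions keep nonzero probability.
n = len(tasks)
counts = np.ones((n, n))
for seq in train:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq):
    """Log-probability of a task sequence under the fitted chain."""
    return sum(np.log(P[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

typical = log_likelihood(["kettle", "cupboard", "fridge", "pour", "stir"])
atypical = log_likelihood(["stir", "pour", "fridge", "kettle", "cupboard"])
print(typical, atypical)
```

A sequence that follows the trained protocol scores a much higher log-likelihood than a scrambled one, which is the basis for recognizing whether the expected activity was performed.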
Abstract:
The most common pattern of classroom discourse follows a three-part exchange of teacher initiation, student response, and teacher evaluation or follow-up (IRE/IRF) (Cazden, 2001). Although sometimes described as encouraging illusory understanding (Lemke, 1990), triadic exchanges can mediate meaning (Nassaji & Wells, 2000). This paper focuses on one case from a study of discursive practices of seven middle grades teachers identified for their expertise in mathematics instruction. The central result of the study was the development of a model to explain how teachers use discourse to mediate mathematical meaning in whole group instruction. Drawing on the model for analysis, thick descriptions of one teacher’s skillful orchestration of triadic exchanges that enhance student understanding of mathematics are presented.