892 results for "Towards Seamless Integration of Geoscience Models and Data"


Abstract:

An important safety aspect to be considered when foods are enriched with phytosterols and phytostanols is the oxidative stability of these lipid compounds, i.e. their resistance to oxidation and thus to the formation of oxidation products. This study concentrated on producing scientific data to support this safety evaluation process. In the absence of an official method for analyzing phytosterol/stanol oxidation products, we first developed a new gas chromatographic-mass spectrometric (GC-MS) method. We then investigated factors affecting these compounds' oxidative stability in lipid-based food models in order to identify critical conditions under which significant oxidation reactions may occur. Finally, the oxidative stability of phytosterols and stanols in enriched foods during processing and storage was evaluated. Enriched foods covered a range of commercially available phytosterol/stanol ingredients, different heat treatments during food processing, and different multiphase food structures. The GC-MS method was a powerful tool for measuring oxidative stability. Data obtained in food model studies revealed that the critical factors for the formation and distribution of the main secondary oxidation products were sterol structure, reaction temperature, reaction time, and lipid matrix composition. Under all conditions studied, phytostanols, as saturated compounds, were more stable than unsaturated phytosterols. In addition, esterification made phytosterols more reactive than free sterols at low temperatures, while at high temperatures the situation was reversed. Generally, oxidation reactions were more significant at temperatures above 100°C. At lower temperatures, the significance of these reactions increased with increasing reaction time. The effect of lipid matrix composition was dependent on temperature: at temperatures above 140°C, phytosterols were more stable in an unsaturated lipid matrix, whereas below 140°C they were more stable in a saturated lipid matrix. At 140°C, phytosterols oxidized at the same rate in both matrices. Regardless of temperature, phytostanols oxidized more in an unsaturated lipid matrix. Generally, the distribution of oxidation products seemed to be associated with the phase of overall oxidation: 7-ketophytosterols accumulated while oxidation had not yet reached the dynamic state; once this state was attained, the major products were 5,6-epoxyphytosterols and 7-hydroxyphytosterols. The changes observed in phytostanol oxidation products were not as informative, since all the stanol oxides quantified were hydroxyl compounds. The formation of these secondary oxidation products did not account for all of the phytosterol/stanol losses observed during the heating experiments, indicating the presence of dimeric, oligomeric or other oxidation products, especially when free phytosterols and stanols were heated at high temperatures. Commercially available phytosterol/stanol ingredients were stable during food processes such as spray-drying and ultra-high-temperature (UHT)-type heating and subsequent long-term storage. Pan-frying, however, induced phytosterol oxidation and was classified as a rather deleterious process. Overall, the findings indicated that although phytosterols and stanols are stable under normal food processing conditions, attention should be paid to their use in frying. Complex interactions with other food constituents also suggested that when new phytosterol-enriched foods are developed, their oxidative stability must first be established. The results presented here will assist in choosing safe conditions for phytosterol/stanol enrichment.

Abstract:

It is common to model the dynamics of fisheries using natural and fishing mortality rates estimated independently in two separate analyses. Fishing mortality is routinely estimated from widely available logbook data, whereas natural mortality estimation has often required more specific, less frequently available data. However, in the case of the fishery for brown tiger prawn (Penaeus esculentus) in Moreton Bay, both fishing and natural mortality rates have been estimated from logbook data. The present work extended the fishing mortality model to incorporate an eco-physiological response of the tiger prawn to temperature, and allowed recruitment timing to vary from year to year. These ecological characteristics of the dynamics of this fishery were ignored in the separate model that estimated natural mortality. Therefore, we propose to estimate both natural and fishing mortality rates within a single model using a consistent set of hypotheses. This approach was applied to Moreton Bay brown tiger prawn data collected between 1990 and 2010. Natural mortality was estimated by maximum likelihood to be 0.032 ± 0.002 week⁻¹, approximately 30% lower than the fixed value used in previous models of this fishery (0.045 week⁻¹).
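
To make the single-model idea concrete, here is a minimal sketch of jointly estimating a weekly natural mortality rate M and a catchability q by maximum likelihood from effort and catch data. The structure (single cohort, Baranov catch equation, lognormal catch errors) and all numbers are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch: joint ML estimation of natural mortality M and
# catchability q from weekly effort/catch data (assumed structure,
# hypothetical data; not the paper's model).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(log_params, effort, catch_obs, n0):
    m, q = np.exp(log_params)                  # log scale keeps rates positive
    n, nll = n0, 0.0
    for e, c in zip(effort, catch_obs):
        f = q * e                              # weekly fishing mortality
        z = m + f                              # total mortality
        c_pred = n * (f / z) * (1 - np.exp(-z))    # Baranov catch equation
        nll += (np.log(c) - np.log(c_pred)) ** 2   # lognormal error, up to constants
        n *= np.exp(-z)                        # survivors to the next week
    return nll

# Hypothetical data simulated with M = 0.032 per week and q = 5e-5.
rng = np.random.default_rng(0)
effort = rng.uniform(200, 400, 30)
n, catches = 1e6, []
for e in effort:
    f, z = 5e-5 * e, 5e-5 * e + 0.032
    catches.append(n * (f / z) * (1 - np.exp(-z)) * rng.lognormal(0, 0.05))
    n *= np.exp(-z)

fit = minimize(neg_log_lik, np.log([0.05, 1e-4]),
               args=(effort, np.array(catches), 1e6), method="Nelder-Mead")
print("estimated M (per week):", np.exp(fit.x)[0])
```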

Abstract:

This thesis examines the right to self-determination, a norm used for numerous purposes by multiple actors in the field of international relations, with relatively little clarity or agreement on the actual and potential meaning of the right. In international practice, however, the main focus in applying the right has been the context of decolonization, as set by the United Nations in its early decades. Thus, in Africa the right to self-determination has traditionally implied that the colonial territories, and particularly the populations within these territories, were to constitute the peoples entitled to the right. That is, self-determination by decolonization provided a framework for the construction of independent nation-states in Africa, whilst other dimensions of the right remained largely or totally neglected. With the objective of assessing the scope, content, developments and interpretations of the right to self-determination in Africa, particularly with regard to the relevance of the right today, the thesis proceeds on two fundamental hypotheses. The first is that Mervyn Frost's theory of settled norms, among which he lists the right to self-determination, assumes too much. Even if the right to self-determination is a human right belonging to all peoples, stipulated, inter alia, in the first Article of the 1966 International Human Rights Covenants, it is a highly politicized and context-bound right rather than one so settled and observed that its denial would need special justification. Still, the suggested inconsistency or non-compliance with the norm of self-determination is not intended to prove the uselessness or inappropriateness of the norm but, on the contrary, to invite and encourage debate on the potential use and coverage of the right to self-determination. The second hypothesis is that within the concept of self-determination there are two normative dimensions. One has to do with the idea and practice of statehood: the nation and collectivity that may decide to conduct itself as an independent state. The other has to do with self-determination as a human right, a normative condition to be enjoyed by people and peoples within states, that supersedes state authority. These external and internal dimensions need to be seen as complementary and co-terminous, not as mutually exclusive alternatives. The thesis proceeds on the assumption that the internal dimension of the right, with human rights and democracy at its core, has not been considered as important as the external. In turn, this unbalanced and selective interpretation has cast the true normative purpose of the right (making the world better and bringing about more just polity models) in a somewhat peculiar light. The right to self-determination in the African context is assessed through case studies of Western Sahara, Southern Sudan and Eritrea. The study asks what these cases say about the right to self-determination in Africa and what their lessons could contribute to the understanding and relevance of the right in today's Africa. The study demonstrates that even in the context of decolonization, the application of the right to self-determination has been far from the consistent approach supposedly followed by the international community: in many respects similar colonial histories have easily led to rather different destinies. While Eritrea secured an internationally recognized right to self-determination in the form of retroactive independence in 1993, international recognition of distinct Western Sahara and Southern Sudan entities is contingent on complex and problematic conditions being satisfied. Overall, it is a considerable challenge for international legality to meet empirical political reality in a meaningful way, so that the universal values attached to the norm of self-determination are not overlooked or compromised but rather reinforced in the process of implementing the right. Consequently, this thesis seeks a more comprehensive understanding of the right to self-determination with particular reference to post-colonial Africa and with an emphasis on the internal, human rights and democracy dimensions of the norm. It is argued that the right to self-determination cannot be perceived only as an inter-state issue, as it is also very much an intra-state issue, including the possibility of different sub-state arrangements exercised under the right, for example in the form of autonomy. At the same time, the option of independent statehood achieved through secession remains a mode of exercising, and part of, the right to self-determination. But in whatever form or way it is applied, the right to self-determination, as a normative instrument, should constitute and work as a norm that comprehensively brings added value in terms of the objectives of human rights and democracy. From a normative perspective, a peoples' right should not be allowed to transform itself into a right of states. Finally, in light of the case studies of Western Sahara, Southern Sudan and Eritrea, the thesis suggests that our understanding of the right to self-determination should now reach beyond the post-colonial context in Africa. It appears that both the questions and the answers to the most pertinent issues of self-determination in the cases studied must increasingly be sought within the post-colonial African state rather than solely in colonial history. In this vein, the right to self-determination can be seen not only as a tool for creating states but also as a way to transform the state itself from within. Any such genuinely post-colonial approach may imply a judicious reconsideration, adaptation or updating of the right and our understanding of it in order to render it meaningful in Africa today.

Abstract:

In a very recent study [1] the Renormalisation Group (RNG) turbulence model was used to obtain flow predictions in a strongly swirling quarl burner, and was found to perform well in predicting certain features that are not well captured by less sophisticated turbulence models. The implication is that the RNG approach should provide an economical and reliable tool for the prediction of swirling flows in combustor and furnace geometries commonly encountered in technological applications. To test this hypothesis, the present work considers flow in a model furnace for which experimental data are available [2]. The essential features of the flow that differentiate it from the previous study [1] are that the annular air jet entry is relatively narrow and that the base wall of the cylindrical furnace is at 90 degrees to the inlet pipe. For swirl numbers of order 1 the resulting flow is highly complex, with significant inner and outer recirculation regions. The RNG and standard k-epsilon models are used to model the flow for both swirling and non-swirling entry jets, and the results are compared with the experimental data [2]. Near-wall viscous effects are accounted for in both models via the standard wall function formulation [3]. For the RNG model, additional computations with grid placement extending well inside the near-wall viscous-affected sublayer are performed in order to assess the low-Reynolds-number capabilities of the model.
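
For readers unfamiliar with the terminology, the swirl number S mentioned above is conventionally defined as the ratio of the axial flux of angular momentum to the axial flux of axial momentum times a characteristic radius (a standard textbook definition; the precise variant used in [1, 2] may differ):

```latex
S = \frac{\int_0^R \rho\,\bar{u}\,\bar{w}\,r^2\,dr}{R \int_0^R \rho\,\bar{u}^2\,r\,dr}
```

where \bar{u} and \bar{w} are the mean axial and tangential velocity components and R is the inlet radius; S of order 1 indicates rotation strong enough to set up the central recirculation described above.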

Abstract:

Self-tracking, the process of recording one's own behaviours, thoughts and feelings, is a popular approach to enhance one's self-knowledge. While dedicated self-tracking apps and devices support data collection, previous research highlights that the integration of data constitutes a barrier for users. In this study we investigated how members of the Quantified Self movement---early adopters of self-tracking tools---overcome these barriers. We conducted a qualitative analysis of 51 videos of Quantified Self presentations to explore intentions for collecting data, methods for integrating and representing data, and how intentions and methods shaped reflection. The findings highlight two different intentions---striving for self-improvement and curiosity in personal data---which shaped how these users integrated data, i.e. the effort required. Furthermore, we identified three methods for representing data---binary, structured and abstract---which influenced reflection. Binary representations supported reflection-in-action, whereas structured and abstract representations supported iterative processes of data collection, integration and reflection. For people tracking out of curiosity, this iterative engagement with personal data often became an end in itself, rather than a means to achieve a goal. We discuss how these findings contribute to our current understanding of self-tracking amongst Quantified Self members and beyond, and we conclude with directions for future work to support self-trackers with their aspirations.

Abstract:

Background: Poor clinical handover has been associated with inaccurate clinical assessment and diagnosis, delays in diagnosis and test ordering, medication errors and decreased patient satisfaction in the acute care setting. Research on the handover process in the residential aged care sector is very limited.

Purpose: The aims of this study were to: (i) develop an in-depth understanding of the handover process in aged care by mapping all the key activities and their information dynamics; (ii) identify gaps in information exchange in the handover process and analyze the implications for resident safety; and (iii) develop practical recommendations on how information and communication technology (ICT) can improve the process and resident safety.

Methods: The study was undertaken at a large metropolitan facility in NSW with more than 300 residents and a staff including 55 registered nurses (RNs) and 146 assistants in nursing (AINs). A total of 3 focus groups, 12 interviews and 3 observation sessions were conducted between July and October 2010. Process mapping was undertaken by translating the qualitative data via a five-category code book that was developed prior to the analysis.

Results: Three major sub-processes were identified and mapped: handover process (HOP) I, "information gathering by the RN"; HOP II, "preparation of the preliminary handover sheet"; and HOP III, "execution of the handover meeting". Inefficiencies were identified in the handover process, including duplication of information, use of multiple communication modes and information sources, and lack of standardization.

Conclusion: By providing a robust process model of handover, this study makes two critical contributions to research in aged care: (i) a means to identify important, possibly suboptimal, practices; and (ii) valuable evidence for planning and improving ICT implementation in residential aged care. The mapping of this process enabled analysis of gaps in information flow and their potential impact on resident safety. In addition, it offers a basis for further studies of a process that, despite its importance for resident safety and continuity of care, lacks research.

Abstract:

The knowledge of hydrological variables (e.g. soil moisture, evapotranspiration) is of pronounced importance in various applications, including flood control, agricultural production and effective water resources management. These applications require accurate spatial and temporal prediction of hydrological variables in a watershed or basin. Though hydrological models can simulate these variables at the desired spatial and temporal resolution, they are often validated against variables that are either sparse in resolution (e.g. soil moisture) or averaged over large regions (e.g. runoff). A combination of a distributed hydrological model (DHM) and remote sensing (RS) has the potential to improve resolution. Data assimilation schemes can optimally combine DHM and RS. Retrieval of hydrological variables (e.g. soil moisture) from remote sensing and assimilating them into a hydrological model requires validation of the algorithms using field studies. Here we present a review of methodologies developed to assimilate RS into DHM and demonstrate the application for soil moisture in a small experimental watershed in south India.
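
As a concrete illustration of the assimilation step reviewed here, the sketch below updates a toy bucket-model soil moisture state with an occasional satellite-style retrieval via a scalar Kalman update. The model, error variances and data are all hypothetical; real DHM-RS systems use far richer state vectors and observation operators.

```python
# Minimal sketch: assimilating a surface soil-moisture retrieval into a
# toy bucket model with a scalar Kalman update (hypothetical numbers).
import numpy as np

def forecast(theta, rain, et, k_loss=0.05):
    """One-step soil moisture forecast for a simple bucket model."""
    return np.clip(theta + rain - et - k_loss * theta, 0.0, 0.5)

theta, p = 0.25, 0.02 ** 2       # state (m3/m3) and its error variance
q, r = 0.01 ** 2, 0.03 ** 2      # model and observation error variances

rng = np.random.default_rng(1)
for day in range(10):
    # Forecast step: propagate the state and inflate its uncertainty.
    theta = forecast(theta, rain=rng.uniform(0, 0.02), et=0.004)
    p += q
    # Assimilation step: every 3rd day a satellite retrieval arrives.
    if day % 3 == 0:
        y = theta + rng.normal(0, np.sqrt(r))   # synthetic retrieval
        k = p / (p + r)                         # Kalman gain (H = 1)
        theta += k * (y - theta)                # update the state
        p *= (1 - k)                            # update the variance
    print(f"day {day}: theta = {theta:.3f}")
```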

Abstract:

We have evaluated techniques for estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation. These two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit was generally better when data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
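
To illustrate the half-normal approach named above, here is a minimal sketch of a line-transect density estimate: fit the half-normal detection function g(x) = exp(-x²/(2σ²)) to perpendicular sighting distances, convert σ to an effective strip width, and divide the number of sightings by the area effectively surveyed. The distances and transect length are hypothetical.

```python
# Minimal sketch (illustrative data, not the study's): line-transect
# density under a half-normal detection function.
import numpy as np

def halfnormal_density(distances_m, transect_length_km):
    x = np.asarray(distances_m, dtype=float)
    sigma = np.sqrt(np.mean(x ** 2))          # MLE of sigma for half-normal
    esw_m = sigma * np.sqrt(np.pi / 2.0)      # effective strip width
    n = len(x)
    # Area effectively surveyed, counting both sides of the line (km^2).
    area_km2 = 2.0 * (esw_m / 1000.0) * transect_length_km
    return n / area_km2                        # groups per km^2

# Hypothetical sightings: perpendicular distances (m) along 10 km walked.
dists = [5, 12, 3, 25, 8, 40, 15, 6, 30, 10]
print(f"{halfnormal_density(dists, 10.0):.1f} groups/km^2")
```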

Abstract:

Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In the radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled within observation errors which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling. The impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can result in zero even when there are systematic differences in wind speed and/or direction; a bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, enables the exploitation of radial wind observations also for NWP model validation. One-month model experiments performed with HIRLAM model versions differing only in a surface stress parameterization detail indicate that the use of radar wind observations in NWP model validation is very beneficial.
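
The core of the observation operator described above is a geometric projection. The sketch below shows a simplified version (flat-earth geometry, no beam broadening or path bending, all inputs hypothetical) of projecting a model wind vector onto the radar beam; it is not HIRLAM code.

```python
# Minimal sketch: a radial-wind observation operator projecting a model
# wind vector onto the radar pulse path (standard geometry, simplified).
import numpy as np

def radial_wind(u, v, w, azimuth_deg, elevation_deg):
    """Project a model wind (u east, v north, w up; m/s) onto the beam."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return (u * np.sin(az) * np.cos(el)
            + v * np.cos(az) * np.cos(el)
            + w * np.sin(el))                # vertical term, usually small

# A 20 m/s westerly seen by a beam pointing east at 0.5 deg elevation:
print(f"{radial_wind(20.0, 0.0, 0.0, 90.0, 0.5):.2f} m/s")
```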

Abstract:

This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled simulation-to-data-analysis cycle; typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, the full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of ANN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
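
As a toy analogue of the ANN-based signal/background separation mentioned above (the real analysis used ROOT and detailed CMS simulation), the following sketch trains a small neural network on two hypothetical jet features; every feature, distribution and parameter here is an assumption for illustration only.

```python
# Minimal sketch: a small neural network separating "signal" from
# "background" events using two hypothetical, overlapping jet features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical features (e.g. impact-parameter significance, jet mass),
# drawn so that the two classes overlap but remain separable.
signal = rng.normal([3.0, 1.2], [1.0, 0.4], (n, 2))
background = rng.normal([1.0, 0.8], [1.0, 0.4], (n, 2))
X = np.vstack([signal, background])
y = np.r_[np.ones(n), np.zeros(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print(f"test accuracy: {net.score(X_te, y_te):.2f}")
```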

Abstract:

The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
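
For the linear case, the updating step is the classical discrete Kalman filter. The sketch below runs one for a single-degree-of-freedom oscillator with sparse, noisy displacement measurements; the dynamics, noise levels and measurement schedule are hypothetical stand-ins, and the simple Euler transition matrix is used purely for brevity, not taken from the paper.

```python
# Minimal sketch: discrete Kalman filter tracking a linear oscillator
# state from sparse noisy displacement measurements (hypothetical setup).
import numpy as np

dt, wn, zeta = 0.01, 2 * np.pi, 0.05          # step, natural freq, damping
A = np.array([[1.0, dt],
              [-wn**2 * dt, 1.0 - 2 * zeta * wn * dt]])  # Euler transition
H = np.array([[1.0, 0.0]])                    # displacement measured only
Q = np.diag([1e-8, 1e-6])                     # process noise covariance
R = np.array([[1e-4]])                        # measurement noise covariance

x, P = np.zeros(2), np.eye(2) * 1e-2          # filter state and covariance
x_true = np.array([0.01, 0.0])
rng = np.random.default_rng(3)
for k in range(500):
    x_true = A @ x_true + rng.multivariate_normal([0, 0], Q)
    # Predict:
    x = A @ x
    P = A @ P @ A.T + Q
    # Update only when a (sparse) measurement arrives, every 10 steps:
    if k % 10 == 0:
        z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]))
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
print("final displacement estimate:", x[0])
```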

Abstract:

This thesis is composed of an introductory chapter and four applications, each constituting its own chapter. The common element underlying the chapters is the econometric methodology: the applications rely mostly on leading econometric techniques for the estimation of causal effects. The first chapter introduces the econometric techniques that are employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall, and shows that the performance of men declined due to the news of the school shooting; for women, no similar pattern was observed. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. As the main result, it shows that the employment effect of an increase in the minimum wage is positive for small fast-food restaurants and negative for big fast-food restaurants. It thereby shows that the controversial positive employment effect reported by CK is overturned for big fast-food restaurants, and that the NW data, in contrast to their original results, provide support for the positive employment effect. Chapter 4 employs state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws on traffic fatalities using the CIC estimator. It confirms the CE results that, on average, implementation of a mandatory seat belt law results in an increase in the seat belt usage rate and a decrease in the total fatality rate. In contrast to CE, it also finds evidence for the compensating-behavior theory, observed especially in states along the U.S. border. Chapter 5 studies life cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations, and that older households smoothed their life cycle consumption more as a result of the recession in the 1990s than young households did.
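
Since two chapters hinge on the changes-in-changes estimator, here is a minimal sketch of the continuous-outcome CIC idea (following Athey and Imbens, 2006): the treated group's counterfactual maps its pre-period outcomes through the control group's distributional change over time. The data are synthetic and the implementation deliberately bare-bones.

```python
# Minimal CIC sketch (continuous outcomes, synthetic data): the treated
# group's counterfactual period-1 outcomes are F_01^{-1}(F_00(y10)).
import numpy as np

def cic_effect(y00, y01, y10, y11):
    """y{g}{t}: outcomes for group g (1 = treated) in period t (1 = after)."""
    y00 = np.sort(np.asarray(y00))
    # Empirical CDF of control/before, evaluated at treated/before outcomes.
    ranks = np.searchsorted(y00, y10, side="right") / len(y00)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Map the ranks through the control/after quantile function.
    y10_counterfactual = np.quantile(y01, ranks)
    return np.mean(y11) - np.mean(y10_counterfactual)

rng = np.random.default_rng(7)
y00 = rng.normal(10.0, 2.0, 500)    # control, before
y01 = rng.normal(11.0, 2.0, 500)    # control, after  (time effect +1)
y10 = rng.normal(12.0, 2.0, 500)    # treated, before
y11 = rng.normal(13.5, 2.0, 500)    # treated, after  (+1 time, +0.5 treatment)
print(f"estimated treatment effect: {cic_effect(y00, y01, y10, y11):.2f}")
```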

Abstract:

Ewing sarcoma is an aggressive and poorly differentiated malignancy of bone and soft tissue. It primarily affects children, adolescents, and young adults, with a slight male predominance, and is characterized by a translocation between chromosomes 11 and 22 resulting in the EWSR1-FLI1 fusion transcription factor. The aim of this study is to identify putative Ewing sarcoma target genes through an integrative analysis of three microarray data sets. Array comparative genomic hybridization is used to measure changes in DNA copy number, and the results are analyzed to detect common chromosomal aberrations. mRNA and miRNA microarrays are used to measure the expression of protein-coding and miRNA genes, and these results are integrated with the copy number data. Chromosomal aberrations typically also contain bystander genes in addition to the driving tumor suppressors and oncogenes, and integration with expression data helps to identify the true targets. Correlation between the expression of miRNAs and their predicted target mRNAs is also evaluated to assess the effects of post-transcriptional miRNA regulation on mRNA levels. The highest frequencies of copy number gains were identified in chromosome 8, 1q, and X. Losses were most frequent in 9p21.3, which also showed an enrichment of copy number breakpoints relative to the rest of the genome. Copy number losses in 9p21.3 were found to have a statistically significant effect on the expression of MTAP, but not on CDKN2A, a known tumor suppressor in the same locus. MTAP was also down-regulated in the Ewing sarcoma cell lines compared to mesenchymal stem cells. Genes exhibiting elevated expression in association with copy number gains and up-regulation compared to the reference samples included DCAF7, ENO2, MTCP1, and STK40. Differentially expressed miRNAs were detected by comparing Ewing sarcoma cell lines against mesenchymal stem cells: 21 up-regulated and 32 down-regulated miRNAs were identified, including miR-145, which has been previously linked to Ewing sarcoma. The EWSR1-FLI1 fusion gene represses miR-145, which in turn targets FLI1, forming a mutually repressive feedback loop. In addition to showing elevated expression linked to copy number gains and up-regulation compared to mesenchymal stem cells, STK40 was also found to be a target of four different miRNAs that were all down-regulated in Ewing sarcoma cell lines compared to the reference samples. SLCO5A1 was identified as the only up-regulated gene within a frequently gained region in chromosome 8. This region was gained in over 90% of the cell lines, and with a higher frequency than the neighboring regions. In addition, SLCO5A1 was found to be a target of three miRNAs that were down-regulated compared to the mesenchymal stem cells.
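
One building block of such an integrative analysis is testing whether a copy-number state is reflected in expression, as in the MTAP result above. The sketch below runs such a test on hypothetical per-sample data; the sample sizes, values, and the choice of a Mann-Whitney test are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch (hypothetical data): is expression of one gene lower in
# samples carrying a copy-number loss than in copy-neutral samples?
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(11)
# 12 samples with a loss (cn = 0) and 18 copy-neutral samples (cn = 1).
cn = np.repeat([0, 1], [12, 18])
# Hypothetical log2 expression: lower on average where the gene is lost.
expr = np.where(cn == 0,
                rng.normal(5.0, 0.8, cn.size),
                rng.normal(6.5, 0.8, cn.size))

u, p = mannwhitneyu(expr[cn == 0], expr[cn == 1], alternative="less")
print(f"loss vs neutral: U = {u:.1f}, one-sided p = {p:.3g}")
```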

Abstract:

In this paper, the reduced level of rock in Bangalore, India is derived from data on 652 boreholes in an area covering 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also done for the developed ordinary kriging model. The SVM, a type of learning machine based on statistical learning theory that performs regression using the ε-insensitive loss function, has been used to predict the reduced level of rock from this large data set. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
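
To make the SVM side concrete, here is a minimal support vector regression sketch with an ε-insensitive loss, predicting a rock level from borehole coordinates. The coordinates, trend surface and parameters are hypothetical, and scikit-learn's SVR is used as a stand-in for whatever implementation the paper employed.

```python
# Minimal sketch: epsilon-insensitive support vector regression of rock
# level on borehole coordinates (hypothetical data, illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
# Hypothetical boreholes: easting/northing (km) and rock level (m).
xy = rng.uniform(0, 15, (200, 2))
z = 900 - 2.0 * xy[:, 0] + 1.5 * np.sin(xy[:, 1]) + rng.normal(0, 0.5, 200)

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", epsilon=0.1, C=10.0))
model.fit(xy, z)
print(f"predicted rock level at (7, 7): {model.predict([[7.0, 7.0]])[0]:.1f} m")
```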

Abstract:

This paper presents a novel algorithm for the compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the discrete cosine transformed (DCT) signal. An extension to the well-known Steiglitz-McBride algorithm is proposed to model the higher-frequency components of the input signal more accurately; this is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by differential pulse code modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
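
As a simplified stand-in for this pipeline (the paper's pole-zero/Steiglitz-McBride modelling step is replaced here by plain DCT-coefficient truncation, since that algorithm is not readily available in common Python libraries), the sketch below compresses a synthetic ECG-like signal in the DCT domain and DPCM-codes the retained coefficients. All signal parameters are hypothetical.

```python
# Minimal sketch: DCT-domain compression of a synthetic ECG-like signal
# with DPCM coding of the retained coefficients (not the paper's model).
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 2, 500)
# Synthetic ECG-like trace: slow baseline plus sharp periodic bursts.
ecg = (np.sin(2 * np.pi * 1.2 * t)
       + 0.8 * np.exp(-((t % 0.83) / 0.05) ** 2) * np.sin(2 * np.pi * 15 * t))

coeffs = dct(ecg, norm="ortho")
k = 50                                    # keep only the first k coefficients
step = 0.02                               # DPCM quantizer step size

# Encoder: DPCM on the retained coefficients (integer codes get stored).
diffs = np.diff(coeffs[:k], prepend=0.0)
codes = np.round(diffs / step).astype(int)

# Decoder: integrate dequantized differences, zero-pad, inverse DCT.
rec_coeffs = np.zeros_like(coeffs)
rec_coeffs[:k] = np.cumsum(codes * step)
rec = idct(rec_coeffs, norm="ortho")

prd = 100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)
print(f"kept {k}/{len(ecg)} coefficients, PRD = {prd:.1f}%")
```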