892 results for Modeling and Simulation Challenges


Relevance:

100.00%

Publisher:

Abstract:

This chapter examines the state of evaluation training programs at European universities in 2012. It summarises the results of a survey conducted among representatives of 15 programs located in Belgium, Denmark, Greece, Italy, France, The Netherlands, Romania, Spain, Sweden and Switzerland. Basic information about the programs is reported (e.g. organising body, degree offered, admission requirements, duration in months, price), as well as the programs' core subjects and learning outcomes. The chapter discusses the challenges for university-based study programmes that arise from the current situation of the evaluation profession, and concludes with some thoughts on education and training as requirements for professionalisation in evaluation.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, disaster preparedness through assessment of medical and special needs persons (MSNP) has taken center stage in the public eye, owing to frequent natural disasters such as hurricanes, storm surge and tsunami driven by climate change and increased human activity on our planet. Statistical methods for complex survey design and analysis have gained significance as a consequence. However, many challenges remain in inferring such assessments over the target population for policy-level advocacy and implementation.

Objective. This study discusses the use of statistical methods for disaster preparedness and medical needs assessment to support local and state governments in policy-level decision making and logistic support, so as to avoid loss of life and property in future calamities.

Methods. To obtain precise and unbiased estimates of Medical Special Needs Persons (MSNP) and disaster preparedness for evacuation in the Rio Grande Valley (RGV) of Texas, a stratified, cluster-randomized multi-stage sampling design was implemented. The US School of Public Health, Brownsville surveyed 3,088 households in three counties: Cameron, Hidalgo, and Willacy. Multiple statistical methods were applied, and estimates were obtained taking into account the probability of selection and clustering effects. The statistical methods discussed were Multivariate Linear Regression (MLR), Survey Linear Regression (Svy-Reg), Generalized Estimating Equations (GEE) and Multilevel Mixed Models (MLM), each with and without sampling weights.

Results. The estimated population of the RGV was 1,146,796: 51.5% female, 90% Hispanic, 73% married, 56% unemployed and 37% with personal transport. 40% of people attained education up to elementary school, another 42% reached high school and only 18% went to college. Median household income was less than $15,000/year. MSNP were estimated at 44,196 (3.98%) [95% CI: 39,029; 51,123]. All statistical models were in concordance, with MSNP estimates ranging from 44,000 to 48,000: MLR (47,707; 95% CI: 42,462; 52,999), MLR with weights (45,882; 95% CI: 39,792; 51,972), Bootstrap Regression (47,730; 95% CI: 41,629; 53,785), GEE (47,649; 95% CI: 41,629; 53,670), GEE with weights (45,076; 95% CI: 39,029; 51,123), Svy-Reg (44,196; 95% CI: 40,004; 48,390) and MLM (46,513; 95% CI: 39,869; 53,157).

Conclusion. The RGV is a flood zone, highly susceptible to hurricanes and other natural disasters. People in the region are mostly Hispanic and under-educated, with among the lowest income levels in the U.S. In the event of a disaster, the population is largely incapacitated, with only 37% having personal transport to assist MSNP. Local and state government intervention in planning, preparation and evacuation support is necessary in any such disaster to avoid the loss of precious human life.

Key words: complex surveys, statistical methods, multilevel models, cluster randomized, sampling weights, raking, survey regression, generalized estimating equations (GEE), random effects, intracluster correlation coefficient (ICC).
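The abstract's with/without-weights comparison can be illustrated with a minimal sketch. The data and design weights below are hypothetical (not the RGV survey); the point is only how sampling weights enter a least-squares fit, via the closed form beta = (X'WX)^(-1) X'Wy:

```python
import numpy as np

def weighted_ls(X, y, w=None):
    """Ordinary least squares when w is None, weighted least squares otherwise."""
    if w is None:
        w = np.ones(len(y))
    W = np.diag(w)
    # Solve the weighted normal equations (X'WX) beta = X'Wy.
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])        # intercept + one predictor
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, n)
w = 1.0 + x                                  # hypothetical design weights

beta_unw = weighted_ls(X, y)                 # unweighted estimate
beta_w = weighted_ls(X, y, w)                # weighted estimate
print(beta_unw, beta_w)                      # both close to [2, 3] here
```

In a real complex survey the weights would come from the sampling design (and variance estimation would also account for strata and clusters, as Svy-Reg does), but the mechanics of the weighted point estimate are as above.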

Relevance:

100.00%

Publisher:

Abstract:

Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations.

Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms is derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on sample size and outcome variable. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model.

Conclusion. Correlations between alternative model specifications can provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification. Simulation of power for misspecified models confirmed the results based on correlation methods, and also illustrated the effect of model degrees of freedom on power.
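The correlation idea the paper builds on can be sketched in a few lines. The quadratic "true" form and the uniform grid below are hypothetical choices for illustration, not the paper's NHANES models: a high correlation between the correct and misspecified forms of the predictor suggests the misspecified model retains most of the signal, and hence loses relatively little power.

```python
import numpy as np

# Hypothetical example: the correct specification is x**2, the
# misspecified model uses x linearly. Their correlation over the
# predictor's range approximates how much signal survives.
x = np.linspace(0.0, 1.0, 10_000)
true_form = x ** 2      # assumed correct specification
misspec_form = x        # linear misspecification

r = np.corrcoef(true_form, misspec_form)[0, 1]
print(round(r, 3))      # high correlation -> modest power loss
```

For x uniform on [0, 1] this correlation is about 0.97, which is why, in simple cases like this, a linear fit to a mildly curved relationship sacrifices little power; stronger curvature or categorisation drives the correlation (and power) down.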

Relevance:

100.00%

Publisher:

Abstract:

Obesity, among both children and adults, is a growing public health epidemic. One area of interest is how and why obesity is developing at such a rapid pace among children. Despite a broad consensus on how controlling feeding practices relate to child food consumption and obesity prevalence, much less is known about how non-controlling feeding practices, including modeling, relate to child food consumption. This study investigates how different forms of parent modeling (no modeling, simple modeling, and enthusiastic modeling) and parent adiposity relate to child food consumption, food preferences, and behaviors towards foods. Participants in this experimental study were 65 children (25 boys and 40 girls) aged 3-9 and their parents. Each parent was trained to perform their assigned modeling behavior towards a food identified as neutral (neither liked nor disliked) by their child during a pre-session food-rating task. Parents performed their assigned modeling behavior when cued during a ten-minute observation period with their child. Child food consumption (pieces eaten, grams eaten, and calories consumed) was measured, and food behaviors (positive comments toward food and food requests) were recorded by event-based coding. After the session, parents self-reported their height and weight, and children completed a post-session food-rating task. Results indicate that parent modeling (both simple and enthusiastic forms) did not significantly relate to child food consumption, food preferences, or food requests. However, enthusiastic modeling significantly increased the number of positive food comments made by children. Children's food consumption in response to parent modeling did not differ based on parent obesity status. The practical implications of this study are discussed, along with its strengths and limitations, and directions for future research.

Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes factors associated with the rejection of products at the ports of importing countries and the remedial actions taken by producers in China. As an example, it uses one of China's most competitive agro-food products: live and processed eels. The paper provides an overview of eel production and trade trends in China. In addition, it identifies veterinary drug residues as the cause of port rejections of Chinese eel products, drawing on detailed case studies of export firms and the countermeasures taken by the government and firms.

Relevance:

100.00%

Publisher:

Abstract:

We present a novel framework for encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture by assuming unlimited processing capacity in the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency can be computed systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
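Under the unlimited-capacity assumption, this kind of graph model reduces to a longest-path computation over the prediction dependency DAG: a frame can start encoding only once all of its reference frames are done. A minimal sketch with a hypothetical four-frame structure (illustrative frame names and processing times, not an actual JMVM configuration):

```python
from functools import cache

# Hypothetical prediction structure: per-frame processing times and
# reference (dependency) lists forming a DAG.
proc_time = {"I0": 1.0, "P1": 1.0, "B2": 1.5, "B3": 1.5}
refs = {"I0": [], "P1": ["I0"], "B2": ["I0", "P1"], "B3": ["P1"]}

@cache
def latency(frame):
    # A frame's completion time is its own processing time plus the
    # latest completion time among its references (longest path).
    dep = max((latency(r) for r in refs[frame]), default=0.0)
    return dep + proc_time[frame]

enc_latency = max(latency(f) for f in proc_time)
print(enc_latency)  # longest chain I0 -> P1 -> B2: 1.0 + 1.0 + 1.5 = 3.5
```

With finite processing resources the true latency can only be larger, which is why the paper's graph-model result serves as a lower bound in that case; inspecting which references lie on the longest path is exactly how high-latency prediction relationships are identified.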

Relevance:

100.00%

Publisher:

Abstract:

Nanoinformatics has recently emerged to address the need for computing applications at the nanoscale. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility of developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also discuss the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended "nanotype" to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, much as happened with the Human Genome Project and other -omics projects, on issues such as exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, and developing new approaches for scientific ontologies, among many others.

Relevance:

100.00%

Publisher:

Abstract:

Over a decade ago, nanotechnologists began research on applications of nanomaterials for medicine. This research has revealed a wide range of challenges, as well as many opportunities. Some of these challenges are strongly related to informatics issues, dealing, for instance, with the management and integration of heterogeneous information, the definition of nomenclatures, taxonomies and classifications for various types of nanomaterials, and research on new modeling and simulation techniques for nanoparticles. Nanoinformatics has recently emerged in the USA and Europe to address these issues. In this paper, we present a review of nanoinformatics, describing its origins, the problems it addresses, areas of interest, and examples of current research initiatives and informatics resources. We suggest that nanoinformatics could accelerate research and development in nanomedicine, as has occurred in the past in other fields. For instance, biomedical informatics served as a fundamental catalyst for the Human Genome Project, and other genomic and -omics projects, as well as the translational efforts that link the resulting molecular-level research to clinical problems and findings.

Relevance:

100.00%

Publisher:

Abstract:

Light trapping is becoming increasingly important in crystalline silicon solar cells as thinner wafers are used to reduce costs. In this work, we report on light trapping by rear-side diffraction gratings produced by nano-imprint lithography, using interference lithography as the mastering technology. Gratings fabricated on crystalline silicon wafers are shown to provide significant absorption enhancements. Through a combination of optical measurement and simulation, it is shown that the crossed grating provides better absorption enhancement than the linear grating, and that parasitic reflector absorption is reduced by planarizing the rear reflector, leading to an increase in the useful absorption in the silicon. Finally, electro-optical simulations of solar cells employing the fabricated grating structures are performed to estimate the potential efficiency enhancement.

Relevance:

100.00%

Publisher:

Abstract:

Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. the development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role of computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

Relevance:

100.00%

Publisher:

Abstract:

The Atomic Physics Group at the Institute of Nuclear Fusion (DENIM) in Spain has accumulated experience over the years in developing a collection of computational models and tools for determining relevant microscopic properties of, mainly, ICF and laser-produced plasmas under a variety of conditions. In this work, several applications of those models are presented.

Relevance:

100.00%

Publisher:

Abstract:

Mechanical degradation of tungsten alloys at extreme temperatures in vacuum and oxidizing atmospheres.