981 results for Eclipse Modeling Framework (EMF)


Relevance: 30.00%

Abstract:

Heterogeneous materials are ubiquitous in nature and as synthetic materials. These materials provide a unique combination of desirable mechanical properties emerging from their heterogeneities at different length scales. Future structural and technological applications will require the development of advanced lightweight materials with superior strength and toughness. Cost-effective design of advanced high-performance synthetic materials by tailoring their microstructure is the challenge facing the materials design community. Prior knowledge of structure-property relationships for these materials is imperative for optimal design; thus, understanding such relationships for heterogeneous materials is of primary interest. Furthermore, computational burden is becoming a critical concern in several areas of heterogeneous materials design, so computationally efficient and accurate predictive tools are essential. In the present study, we focus mainly on the mechanical behavior of soft cellular materials and of a tough biological material, the mussel byssus thread. Cellular materials exhibit microstructural heterogeneity through an interconnected network of a single material phase, whereas the mussel byssus thread comprises two distinct material phases. A robust numerical framework is developed to investigate the micromechanisms behind the macroscopic response of both of these materials. Using this framework, the effect of microstructural parameters on the stress state of cellular specimens during the split Hopkinson pressure bar test has been addressed. A Voronoi tessellation based algorithm has been developed to simulate the cellular microstructure. The micromechanisms (microinertia, microbuckling and microbending) governing the macroscopic behavior of cellular solids are investigated thoroughly with respect to various microstructural and loading parameters. To understand the origin of the high toughness of the mussel byssus thread, a Genetic Algorithm (GA) based optimization framework has been developed; it is found that the two material phases (collagens) of the thread are optimally distributed along its length. These applications demonstrate that the presence of heterogeneity in a system demands high computational resources for simulation and modeling. A Higher Dimensional Model Representation (HDMR) based surrogate modeling concept has therefore been proposed to reduce computational complexity, and its applicability has been demonstrated in failure envelope construction and in multiscale finite element techniques. It is observed that the surrogate-based model can capture the behavior of complex material systems with sufficient accuracy. The computational algorithms presented in this thesis will further pave the way for accurate prediction of the macroscopic deformation behavior of various classes of advanced materials from their measurable microstructural features at a reasonable computational cost.
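
As a hedged illustration of the microstructure-generation step, the following minimal Python sketch (not the thesis code) seeds a unit square with random nuclei and extracts the finite Voronoi cell walls that a finite element model would then mesh as struts; the seed count, domain, and SciPy dependency are assumptions.

```python
# Minimal sketch of a Voronoi-tessellation-based cellular microstructure,
# assuming SciPy is available; cell walls would be meshed as beam elements.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 1.0, size=(200, 2))   # random cell nuclei in a unit square
vor = Voronoi(seeds)

# Collect finite cell-wall segments (pairs of vertex coordinates);
# ridges touching the infinite vertex (-1) are discarded.
walls = [vor.vertices[ridge] for ridge in vor.ridge_vertices if -1 not in ridge]
print(f"{len(walls)} finite cell walls from {len(seeds)} seeds")
```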

Relevance: 30.00%

Abstract:

PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.

Relevance: 30.00%

Abstract:

How do probabilistic models represent their targets and how do they allow us to learn about them? The answer to this question depends on a number of details, in particular on the meaning of the probabilities involved. To classify the options, a minimalist conception of representation (Suárez 2004) is adopted: modelers devise substitutes ("sources") of their targets and investigate them to infer something about the target. Probabilistic models allow us to infer probabilities about the target from probabilities about the source. This leads to a framework in which we can systematically distinguish between different models of probabilistic modeling. I develop a fully Bayesian view of probabilistic modeling, but I argue that, as an alternative, Bayesian degrees of belief about the target may be derived from ontic probabilities about the source. Remarkably, some accounts of ontic probabilities can avoid problems if they are supposed to apply to sources only.

Relevance: 30.00%

Abstract:

At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The two methods are conceptualized as different types of activities, each characterized by its own primary goal. This minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways of conceptualizing the distinction more flexibly, and I make a concrete suggestion of how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.

Relevance: 30.00%

Abstract:

Hodgkin's disease (HD) is a cancer of the lymphatic system. Survivors of HD face a variety of adverse late effects, among which second primary tumors (SPT) are one of the most serious. This dissertation aims to model time-to-SPT in the presence of death and HD relapses during follow-up. The model is designed to handle the mixture phenomenon of SPT and the influence of death; relapses of HD are adjusted for as a covariate. A proportional hazards framework is used to define the SPT intensity function, which includes an exponential term incorporating the explanatory variables. Death as a competing risk is considered according to different scenarios, depending on which terminal event comes first. The Newton-Raphson method is used to obtain the parameter estimates. The proposed method is applied to a real data set containing a group of HD patients. Several risk factors for the development of SPT are identified, and the findings are noteworthy for the development of healthcare guidelines that may lead to the early detection or prevention of SPT.
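
The following is a generic sketch of the Newton-Raphson estimation step described above, not the dissertation's code; `score` and `hessian` stand for hypothetical callables returning the first and second derivatives of the model's log-likelihood.

```python
# Generic Newton-Raphson iteration for maximizing a log-likelihood:
# beta <- beta - H(beta)^{-1} U(beta), repeated until the step is small.
import numpy as np

def newton_raphson(score, hessian, beta0, tol=1e-8, max_iter=50):
    """score(beta) -> gradient vector; hessian(beta) -> Hessian matrix."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(beta), score(beta))
        beta = beta - step
        if np.max(np.abs(step)) < tol:   # converged when the update is tiny
            break
    return beta
```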

Relevance: 30.00%

Abstract:

Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In relatively recent years, the potential of finite mixture models has been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates involved in failure times differ across latent classes, while the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture modeling framework. A joint model is developed to incorporate the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of our joint model compared with the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
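
As a hedged sketch of the class-membership estimation described above: under a mixture model, the posterior probability that a subject belongs to class k is proportional to the class weight times the likelihood of the subject's observed data under class k. The array names below are illustrative, not the dissertation's notation.

```python
# Posterior class-membership probabilities for a finite mixture model,
# computed stably in log space: P(class k | data_i) ∝ pi_k * L_ik.
import numpy as np

def class_posteriors(log_lik, log_pi):
    """log_lik: (n, K) per-class log-likelihoods; log_pi: (K,) log class weights."""
    logw = log_lik + log_pi                      # unnormalized log posteriors
    logw -= logw.max(axis=1, keepdims=True)      # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)      # each row sums to 1
```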

Relevance: 30.00%

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these values are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
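
As a hedged example of one such operation (not the papers' actual feature set), the sketch below reduces a window of raw samples to a single trend feature, the least-squares slope; the vital-sign numbers and window length are illustrative.

```python
# Reduce raw time series samples in one window to a trend latent feature:
# the slope of a least-squares line fitted to value-vs-time.
import numpy as np

def window_slope(values, times):
    """Least-squares slope of `values` vs `times` over one window."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(values, dtype=float)
    t = t - t.mean()
    return float(np.dot(t, y - y.mean()) / np.dot(t, t))

# e.g. heart-rate samples over a 10-minute window (illustrative numbers)
hr = [112, 110, 106, 101, 95]
t_min = [0, 2.5, 5, 7.5, 10]
print(window_slope(hr, t_min))   # negative slope = deterioration signal
```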
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances (a sketch of the first two of these steps appears after this paragraph). The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished classification accuracy compared with adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
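
Below is a minimal sketch, under assumed grid parameters, of the first two preprocessing steps named above: re-expressing samples relative to a reference time and imputing them onto a predefined, evenly spaced grid by linear interpolation.

```python
# Align a raw series to a reference time and impute it onto a fixed grid,
# so every candidate feature has the structure chosen in the design phase.
import numpy as np

def to_fixed_grid(times, values, t_ref, horizon_min=60, step_min=5):
    """Re-express samples relative to t_ref and interpolate onto a fixed grid."""
    rel_t = np.asarray(times, dtype=float) - t_ref       # minutes before reference
    grid = np.arange(-horizon_min, 0 + step_min, step_min)
    return grid, np.interp(grid, rel_t, values)          # linear imputation
```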

Relevance: 30.00%

Abstract:

Even the best school health education programs will be unsuccessful if they are not disseminated effectively in a manner that encourages classroom adoption and implementation. This study involved two components: (1) the development of a videotape intervention to be used in the dissemination phase of a 4-year, NCI-funded diffusion study, and (2) the evaluation of that videotape intervention strategy in comparison with a print (information transfer) strategy. Conceptualization was guided by Social Learning Theory, Diffusion Theory, and communication theory; additionally, the PRECEDE Framework was used. Seventh and 8th grade classroom teachers from Spring Branch Independent School District in west Houston participated in the evaluation of the videotape and print interventions using a 57-item preadoption survey instrument developed by the UT Center for Health Promotion Research and Development. Two-way ANOVA was used to study individual score differences for five outcome variables: Total Scale Score (comprising 57 predisposing, enabling, and reinforcing items), Adoption Characteristics Subscale, Attitude Toward Innovation Subscale, Receptivity Toward Innovation, and Reinforcement Subscale. The aim of the study was to compare the effect of the video and print interventions, alone and in combination, on score differences. Seventy-three 7th and 8th grade classroom teachers completed the study, providing baseline and post-intervention measures on factors related to the adoption and implementation of tobacco-use prevention programs. In relation to the study questions, two-way ANOVA found significant scoring differences for those exposed to the videotape intervention alone on both the Attitude Toward Innovation Subscale and the Receptivity to Adopt Subscale. No significant results were found to suggest that print alone influences favorable scoring differences between baseline and post-intervention testing. One interaction effect was found, suggesting that video and print combined are more effective in influencing favorable scoring differences on the Reinforcement for Adoption Subscale. This research is unique in that it represents a newly emerging field in health promotion communications research, with implications for Social Learning Theory, Diffusion Theory, and communication science that are applicable to the development of improved school health interventions.
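
For readers unfamiliar with the design, a brief sketch of a 2x2 factorial two-way ANOVA with interaction in Python/statsmodels follows; the data frame, column names, and values are hypothetical stand-ins, not the study's data.

```python
# Two-way ANOVA with interaction: main effects of each medium plus
# the video x print interaction, on pre/post score differences.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score_diff": [4.1, 5.0, 1.2, 0.8, 6.3, 7.1, 2.0, 1.5],  # toy values
    "video":      [1, 1, 0, 0, 1, 1, 0, 0],                   # exposed to video?
    "print_":     [0, 0, 1, 1, 1, 1, 0, 0],                   # exposed to print?
})
model = ols("score_diff ~ C(video) * C(print_)", data=df).fit()
print(anova_lm(model, typ=2))
```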

Relevance: 30.00%

Abstract:

Public participation is increasingly advocated as a necessary feature of natural resources management. The EU Water Framework Directive (WFD) is one example, as it prescribes participatory processes as necessary features of basin management plans (EC 2000). The rationale behind this mandate is that involving interest groups ideally yields higher-quality decisions, which are arguably more likely to meet public acceptance (Pahl-Wostl 2006). Furthermore, failing to involve stakeholders in policy-making might hamper the implementation of management initiatives, as controversial decisions can lead pressure lobbies to generate public opposition (Giordano et al. 2005, Mouratiadou and Moran 2007).

Relevance: 30.00%

Abstract:

We present a novel framework for encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture for encoding latency analysis by assuming unlimited processing capacity on the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is obtained systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that contribute most to the encoding latency. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
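
A minimal sketch of the graph-model idea (under the unlimited-capacity assumption, and with made-up frames and processing times): each frame's earliest completion time is the longest path of processing times through its prediction dependencies, so the encoding latency of the structure is the maximum over all frames.

```python
# Encoding latency as the longest path through a prediction-dependency DAG.
from functools import lru_cache

proc_time = {"I0": 10, "P1": 8, "B2": 6}             # illustrative frames (ms)
deps = {"I0": [], "P1": ["I0"], "B2": ["I0", "P1"]}  # prediction references

@lru_cache(maxsize=None)
def latency(frame):
    """Earliest completion = max over referenced frames + own processing time."""
    return max((latency(r) for r in deps[frame]), default=0) + proc_time[frame]

print(max(latency(f) for f in proc_time))   # encoding latency of the structure
```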

Relevance: 30.00%

Abstract:

A clear statement, textually cited from Byers et al. (1938), defines the framework of this special issue: "True soil is the product of the action of climate and living organism upon the parent material, as conditioned by the local relief. The length of time during which these forces are operative is of great importance in determining the character of the ultimate product. Drainage conditions are also important and are controlled by local relief, by the nature of the parent material or underlying rock strata, or by the amount of precipitation in relation to rate of percolation and runoff water. There are, therefore, five principal factors of soil formation: Parent material, climate, biological activity, relief and time. These soil forming factors are interdependent, each modifying the effectiveness of the others." Owing to the various processes associated with its formation and genesis, soil dynamics exhibits high complexity, which gives rise to several levels of structure (using this term in a broad sense).

Relevance: 30.00%

Abstract:

Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time; it can accurately determine many important program properties such as aliasing, logical data structures, and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
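
As a toy illustration of the general technique (far simpler than the shape analysis presented), the sketch below abstractly interprets straight-line code over a may-point-to domain: each variable maps to the set of abstract heap cells it may reference, so two variables may alias when their sets intersect.

```python
# Toy abstract interpretation for aliasing: the abstract state maps each
# variable to the set of abstract heap cells it may point to.
def interpret(stmts):
    state = {}                                   # var -> set of abstract cells
    for i, (op, dst, src) in enumerate(stmts):
        if op == "new":                          # x = new -> fresh abstract cell
            state[dst] = {f"cell@{i}"}
        elif op == "copy":                       # x = y -> may point wherever y does
            state[dst] = set(state.get(src, ()))
    return state

s = interpret([("new", "a", None), ("copy", "b", "a"), ("new", "c", None)])
print(s["a"] & s["b"])   # non-empty intersection => a and b may alias
```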

Relevance: 30.00%

Abstract:

The aim of this thesis is to study the mechanisms of instability that occur in swept wings when the angle of attack increases. To this end, a simplified model for the non-orthogonal swept leading-edge boundary layer has been used, together with different numerical techniques, in order to solve the linear stability problem that describes the behavior of perturbations superposed upon this base flow. Two different approaches, matrix-free and matrix-forming methods, have been validated using direct numerical simulations with spectral resolution. In this way, flow instability in the non-orthogonal swept attachment-line boundary layer is addressed in a linear analysis framework via the solution of the pertinent global (BiGlobal) PDE-based eigenvalue problem. Subsequently, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis, Fedorov, Obrist & Dallmann (2003) for orthogonal flow, which includes previous models as particular cases and recovers global instability analysis results, is presented for non-orthogonal flow. Direct numerical simulations have been used to verify the stability results and unravel the limits of validity of the basic flow model analyzed. The effect of the angle of attack, AoA, on the critical conditions of the non-orthogonal problem has been documented; an increase of the angle of attack, from AoA = 0 (orthogonal flow) up to values close to π/2, which make the assumptions under which the basic flow is derived questionable, is found to systematically destabilize the flow. The critical conditions of non-orthogonal flows at 0 ≤ AoA ≤ π/2 are shown to be recoverable from those of orthogonal flow via a simple analytical transformation involving AoA. These results can help in understanding the destabilization mechanisms at the attachment line of wings at finite angles of attack. Studies taking into account variations of the pressure field in the basic flow, and the extension to compressible flows, remain open issues.
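
As a hedged sketch of the two solution strategies named above, applied to a stand-in discretized 1D operator rather than the actual stability operator: a matrix-forming solve builds the matrix and calls a dense eigensolver, while a matrix-free solve hands ARPACK only the action of the operator on a vector.

```python
# Matrix-forming vs matrix-free eigenvalue solves on a toy 1D operator.
import numpy as np
from scipy.linalg import eig
from scipy.sparse.linalg import LinearOperator, eigs

n = 200  # stand-in discretization of a second-difference operator
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

vals_dense = eig(A, right=False)                       # matrix-forming: dense eig
vals_free = eigs(LinearOperator((n, n), matvec=lambda v: A @ v),
                 k=5, which="LR", return_eigenvectors=False)  # matrix-free: ARPACK
print(sorted(vals_dense.real)[-1], sorted(vals_free.real)[-1])  # leading eigenvalues agree
```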

Relevance: 30.00%

Abstract:

A good understanding of land surface processes is considered a key subject in environmental sciences. The spatial-temporal coverage of remote sensing data provides continuous observations with a high temporal frequency, allowing the assessment of ecosystem evolution at different temporal and spatial scales. Although the value of remote sensing time series has been firmly proved, only a few time series methods have been developed for analyzing these data in a quantitative and continuous manner. In the present dissertation, a working framework to exploit remote sensing time series is proposed, based on the combination of time series analysis and a phenometric approach. The main goal is to demonstrate the use of remote sensing time series to analyze environmental variable dynamics quantitatively. The specific objectives are (1) to assess environmental variables based on remote sensing time series and (2) to develop empirical models to forecast environmental variables. These objectives have been achieved in four applications whose specific objectives are (1) assessing and mapping cotton crop phenological stages using spectral and phenometric analyses, (2) assessing and modeling fire seasonality in two different ecoregions by dynamic models, (3) forecasting forest fire risk on a pixel basis by dynamic models, and (4) assessing vegetation functioning based on temporal autocorrelation and phenometric analysis. The results of this dissertation show the usefulness of function-fitting procedures to model the spectral indices AS1 and AS2. Phenometrics derived from the function-fitting procedure make it possible to identify cotton crop phenological stages. Spectral analysis has demonstrated quantitatively the presence of one cycle in AS2 and two in AS1, as well as the unimodal and bimodal behaviour of fire seasonality in the Mediterranean and temperate ecoregions, respectively. Autoregressive models have been used to characterize the dynamics of fire seasonality in the two ecoregions and to forecast fire risk accurately on a pixel basis. The usefulness of temporal autocorrelation to define and characterize land surface functioning has been demonstrated. Finally, the "Optical Functional Type" concept has been proposed; in this approach, pixels are considered as temporal units and analyzed in terms of their temporal dynamics or functioning.
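
The per-pixel autoregressive forecasting step might look like the following sketch, using statsmodels' AutoReg on one pixel's synthetic seasonal series; the series, lag order, and horizon are illustrative assumptions.

```python
# Fit an autoregressive model to one pixel's time series and forecast ahead.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
t = np.arange(120)  # e.g. 10 years of monthly observations
pixel_series = np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(120)

fit = AutoReg(pixel_series, lags=12).fit()   # 12-month lag structure
print(fit.forecast(steps=6))                 # forecast the next six periods
```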

Relevance: 30.00%

Abstract:

The fracture behavior parallel to the fibers of an E-glass/epoxy unidirectional laminate was studied by means of three-point bending tests on notched beams. Selected tests were carried out within a scanning electron microscope to ascertain the damage and fracture micromechanisms upon loading. The mechanical behavior of the notched beam was simulated within the framework of the embedded cell model, in which the actual composite microstructure was resolved in front of the notch tip. In addition, matrix and interface properties were independently measured in situ using a nanoindenter. The numerical simulations very accurately predicted the macroscopic response of the composite as well as the damage development and crack growth in front of the notch tip, demonstrating the ability of the embedded cell approach to simulate the fracture behavior of heterogeneous materials. Finally, this methodology was exploited to ascertain the influence of matrix and interface properties on the intraply toughness.