880 results for Building information modeling
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates on failure times differ across latent classes while the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture-modeling framework. A joint model is developed to incorporate the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of our joint model compared with the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
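To make the class-membership idea concrete, the following is a minimal sketch, not the dissertation's estimation procedure, of the posterior class probabilities in a two-class survival mixture; the exponential class-specific failure times and the single normally distributed covariate (with a class-specific distribution) are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: posterior class membership in a two-class survival mixture
# where both the failure-time distribution and the covariate distribution
# differ by class. All parameters below are hypothetical.

def class_posteriors(t, x, pi, lam, mu, sigma):
    """P(class = k | T = t, X = x) for an uncensored failure time t and covariate x."""
    f_t = lam * np.exp(-lam * t)  # class-specific exponential failure density
    f_x = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    joint = pi * f_t * f_x        # prior * failure density * covariate density
    return joint / joint.sum()

pi = np.array([0.6, 0.4])             # class priors
lam = np.array([0.05, 0.20])          # class-specific hazard rates
mu, sigma = np.array([1.0, 3.0]), np.array([1.0, 1.0])

# A long survival time with a low covariate value points to the low-hazard class.
print(class_posteriors(t=10.0, x=1.2, pi=pi, lam=lam, mu=mu, sigma=sigma))
```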
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The remaining two classes are unique to time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
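As a concrete illustration of these design choices, here is a minimal sketch, ours rather than the manuscript's code, of turning a raw time series into a latent candidate feature; the ten-minute window, one-sample-per-minute resolution, and least-squares slope are hypothetical choices of the kind the design phase must fix:

```python
import numpy as np

# Minimal sketch: one explicit mathematical operation (a least-squares trend)
# that turns multiple raw values per variable into a single latent candidate
# feature. Window duration and resolution are hypothetical design choices.

def trend_feature(values, minutes_per_sample=1.0):
    """Least-squares slope of a vital-sign window, in units per minute."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

heart_rate_window = np.array([112, 111, 113, 110, 108, 107, 104, 101, 97, 92])
print(trend_feature(heart_rate_window))  # a negative slope characterizes deterioration
```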
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementation of time-series-based models is infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. The modeling process is organized into seventeen steps, each of which is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond what was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
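The comparison reported in the third manuscript can be sketched as a workflow on synthetic data; the features, classifier, and any numbers printed below are illustrative only and are not the study's data, model, or results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Minimal sketch: a baseline model on snapshot (one value per variable)
# features versus the same model with trend features appended. The data are
# synthetic; only the workflow is illustrated.

rng = np.random.default_rng(0)
n = 1000
snapshot = rng.normal(size=(n, 5))   # point-in-time vitals
trend = rng.normal(size=(n, 5))      # slopes computed from time series windows
y = (trend.sum(axis=1) + 0.3 * snapshot[:, 0] + rng.normal(size=n) > 0).astype(int)

for name, X in [("baseline", snapshot),
                ("baseline + trend", np.hstack([snapshot, trend]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```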
Abstract:
Purpose: To discuss the approach and recommendations related to the adoption of a school-based curriculum for violence prevention. Findings: Preliminary assessments suggest that middle and high school youth experience a variety of forms of violence in social and dating relationships. Such experiences have negative academic, behavioral and emotional consequences. Conclusions: The authors have clearly illuminated the need to address the phenomenon of dating violence. The field could benefit from more robust evidence-based investigations that substantiate that interventions have an impact beyond attitudinal changes toward the behavior. Such academic endeavors would provide a platform to validate the inclusion of such information in a school-based curriculum and act as a call to action for broad-based interventions.
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to build increasingly accurate three-dimensional structures of therapeutically relevant proteins that have not yet been crystallized, e.g., Class A G-Protein Coupled Receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize relevant protein binding sites by incorporating protein flexibility. Using a single template structure, the ligand-steered models reasonably reproduced the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and Adenosine 2A receptors. They also performed better than the template and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging target for non-psychoactive pain management, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, one of inverse agonists and one of agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the applicability of the method to structure-based drug design projects.
Abstract:
Even the best school health education programs will be unsuccessful if they are not disseminated effectively in a manner that encourages classroom adoption and implementation. This study involved two components: (1) the development of a videotape intervention to be used in the dissemination phase of a 4-year, NCI-funded diffusion study and (2) the evaluation of that videotape intervention strategy in comparison with a print (information transfer) strategy. Conceptualization was guided by Social Learning Theory, Diffusion Theory, and communication theory. Additionally, the PRECEDE Framework was used. Seventh and 8th grade classroom teachers from Spring Branch Independent School District in west Houston participated in the evaluation of the videotape and print interventions using a 57-item preadoption survey instrument developed by the UT Center for Health Promotion Research and Development. Two-way ANOVA was used to study individual score differences for five outcome variables: Total Scale Score (comprising 57 predisposing, enabling, and reinforcing items), Adoption Characteristics Subscale, Attitude Toward Innovation Subscale, Receptivity Toward Innovation Subscale, and Reinforcement Subscale. The aim of the study was to compare the effects of the video and print interventions, alone and in combination, on score differences. Seventy-three 7th and 8th grade classroom teachers completed the study, providing baseline and post-intervention measures on factors related to the adoption and implementation of tobacco-use prevention programs. Two-way ANOVA, in relation to the study questions, found significant score differences for those exposed to the videotape intervention alone for both the Attitude Toward Innovation Subscale and the Receptivity to Adopt Subscale. No significant results were found to suggest that print alone influences favorable score differences between baseline and post-intervention testing. One interaction effect was found, suggesting that video and print combined are more effective for influencing favorable score differences for the Reinforcement for Adoption Subscale. This research is unique in that it represents a newly emerging field in health promotion communications research, with implications for Social Learning Theory, Diffusion Theory, and communication science that are applicable to the development of improved school health interventions.
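For readers unfamiliar with the analysis, a minimal sketch of a 2x2 factorial ANOVA of this kind, with hypothetical column names and invented scores rather than the study's data, could look like the following:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Minimal sketch of a two-way ANOVA with interaction: score_diff is a
# hypothetical post-minus-baseline change; video and print_ are exposure
# indicators for the two interventions. All values are invented.

df = pd.DataFrame({
    "score_diff": [4, 6, 5, 7, 1, 2, 0, 3, 5, 8, 6, 9, 2, 1, 3, 2],
    "video":      [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
    "print_":     [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
})

# Main effects of each intervention plus the video x print interaction.
model = ols("score_diff ~ C(video) * C(print_)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```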
Abstract:
Fossil shells of planktonic foraminifera serve as the prime source of information on past changes in surface ocean conditions. Because the population size of planktonic foraminifera species changes throughout the year, the signal preserved in fossil shells is biased toward the conditions under which species production was at its maximum. The amplitude of the potential seasonal bias is a function of the magnitude of the seasonal cycle in production. Here we use a planktonic foraminifera model coupled to an ecosystem model to investigate to what degree seasonal variations in the production of the species Neogloboquadrina pachyderma may affect paleoceanographic reconstructions of Heinrich Stadial 1 (~18-15 cal. ka B.P.) in the North Atlantic Ocean. The model implies that during Heinrich Stadial 1 the maximum seasonal production occurred later in the year than during the Last Glacial Maximum (~21-19 cal. ka B.P.) and the pre-industrial era north of 30 °N. A diagnosis of the model output indicates that this change reflects the sensitivity of the species to the seasonal cycle of sea-ice cover and food supply, which collectively shift the modeled maximum production from the Last Glacial Maximum to Heinrich Stadial 1 by up to six months. Assuming equilibrium oxygen isotope incorporation in the shells of N. pachyderma, the modeled changes in seasonality would result in an underestimation of the actual magnitude of the meltwater isotopic signal recorded by fossil assemblages of N. pachyderma wherever calcification is likely to take place.
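A compact way to state the mechanism behind this seasonal bias, a standard formalization rather than an equation quoted from the paper, is that fossil assemblages record a production-weighted rather than a plain annual mean:

```latex
% Flux-weighted recorded signal: fossil shells average the equilibrium signal
% \delta^{18}O_{eq}(t) weighted by species production F(t).
\[
  \delta^{18}\mathrm{O}_{\mathrm{recorded}}
  = \frac{\int_{\mathrm{year}} F(t)\, \delta^{18}\mathrm{O}_{\mathrm{eq}}(t)\, dt}
         {\int_{\mathrm{year}} F(t)\, dt}
\]
```

A shift in the timing of peak production F(t), such as the modeled shift of up to six months, therefore changes the recorded value even when the environmental signal itself is unchanged.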
Abstract:
We present the data structures and algorithms used in an approach for building domain ontologies from folksonomies and linked data. In this approach, we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.
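As an illustration of the enrichment step, the following minimal sketch queries DBpedia, one node of the Linked Open Data cloud, for candidate semantic types of a folksonomy tag; the exact-label matching and the example tag are ours, not the paper's algorithm:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Minimal sketch: enrich a folksonomy tag with semantic types from DBpedia.
# The naive exact-label match stands in for whatever term-to-resource mapping
# the ontology-building pipeline actually uses.

def enrich_tag(tag, lang="en"):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        SELECT DISTINCT ?resource ?type WHERE {{
            ?resource rdfs:label "{tag}"@{lang} ;
                      rdf:type ?type .
        }} LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(r["resource"]["value"], r["type"]["value"])
            for r in results["results"]["bindings"]]

print(enrich_tag("Jazz"))
```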
Application of the Extended Kalman filter to fuzzy modeling: Algorithms and practical implementation
Abstract:
The modeling phase is fundamental both in the analysis of a dynamic system and in the design of a control system. This phase is even more critical when modeling is performed in-line and the only information about the system comes from input/output data. This paper presents several adaptation algorithms for fuzzy systems based on the extended Kalman filter, which make it possible to obtain accurate models without renouncing the computational efficiency that characterizes the Kalman filter, and which allow implementation in-line with the process.
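To make the approach concrete, here is a minimal sketch, not the paper's algorithms, of EKF-based adaptation of the consequent parameters of a zero-order Takagi-Sugeno fuzzy system; the fixed Gaussian membership functions, the noise settings, and the target function are assumptions:

```python
import numpy as np

# Minimal sketch: the consequent parameters of a zero-order Takagi-Sugeno
# fuzzy system are treated as the EKF state. Because the output is linear in
# these parameters, the measurement Jacobian is simply the vector of
# normalized firing strengths.

centers = np.linspace(-1.0, 1.0, 5)   # rule centers (assumed)
width = 0.4                            # membership width (assumed)

def firing(x):
    w = np.exp(-0.5 * ((x - centers) / width) ** 2)
    return w / w.sum()                 # normalized firing strengths

theta = np.zeros(5)                    # consequent parameters (EKF state)
P = np.eye(5) * 10.0                   # state covariance
Q, R = 1e-6 * np.eye(5), 0.01          # process / measurement noise (assumed)

def ekf_step(x, y):
    global theta, P
    H = firing(x)                      # measurement Jacobian
    P_pred = P + Q                     # predict: parameters follow a random walk
    S = H @ P_pred @ H + R             # innovation variance
    K = P_pred @ H / S                 # Kalman gain
    theta = theta + K * (y - H @ theta)
    P = P_pred - np.outer(K, H) @ P_pred

# Learn y = sin(pi * x) in-line from streaming input/output samples.
rng = np.random.default_rng(1)
for _ in range(2000):
    x = rng.uniform(-1, 1)
    ekf_step(x, np.sin(np.pi * x) + rng.normal(scale=0.1))
print(firing(0.5) @ theta)             # close to sin(pi/2) = 1
```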
Abstract:
This paper describes a novel architecture to introduce automatic annotation and processing of semantic sensor data within context-aware applications. Based on well-known state-chart technologies, represented using the W3C SCXML language combined with Semantic Web technologies, our architecture is able to provide enriched higher-level semantic representations of the user's context. This capability to detect and model relevant user situations allows seamless modeling of the actual interaction situation, which can be integrated during the design of multimodal user interfaces (also based on SCXML) so that they can be adequately adapted. The final result of this contribution can therefore be described as a flexible context-aware SCXML-based architecture, suitable both for designing a wide range of multimodal context-aware user interfaces and for implementing the automatic enrichment of sensor data, making it available to the entire Semantic Sensor Web.
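The state-chart idea can be illustrated with a minimal sketch, written in Python rather than SCXML for brevity; the states, sensor events, and transitions are hypothetical:

```python
# Minimal sketch: low-level sensor events are lifted to higher-level user
# situations that a multimodal interface can consult to adapt itself.
# States, events, and transitions are hypothetical.

TRANSITIONS = {
    ("at_desk", "accelerometer.walking"): "moving",
    ("moving", "location.entered_meeting_room"): "in_meeting",
    ("in_meeting", "calendar.meeting_ended"): "at_desk",
}

class ContextModel:
    def __init__(self, state="at_desk"):
        self.state = state

    def on_event(self, event):
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

ctx = ContextModel()
for ev in ["accelerometer.walking", "location.entered_meeting_room"]:
    print(ev, "->", ctx.on_event(ev))
# An adaptive interface could, e.g., silence voice output while "in_meeting".
```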
Abstract:
In 2008, the City Council of Rivas-Vaciamadrid (Spain) decided to promote the construction of "Rivasecopolis", a complex of sustainable buildings in which a new prototype of a zero-energy house would become the office of the Energy Agency. Following the initiative of the City Council, it was decided to recreate the dwelling prototype "Magic-box", which entered the 2005 Solar Decathlon competition. The original project was adapted to a new programme of requirements by adding the spaces necessary for it to work as an office. A university team designed the building and directed the construction work. The new solar house is conceived as a "testing building". It will become the space for assisting citizens with all questions about energy saving, energy efficiency and sustainable construction, with a small permanent exhibition space in addition to workplaces for this information purpose. At the same time, the building incorporates experimental passive architecture systems and a monitoring and control system. Collected data will be sent to the university to support research on the experimental strategies included in the building. This paper describes and analyzes the experience of transforming a prototype into a real, durable building and the benefits for both the university and citizens of learning about sustainability through the building.
Abstract:
Here, a novel and efficient moving object detection strategy based on non-parametric modeling is presented. Whereas the foreground is modeled by combining color and spatial information, the background model is constructed exclusively with color information, resulting in a great reduction of the computational and memory requirements. The estimation of the background and foreground covariance matrices allows us to obtain compact moving regions while reducing the number of false detections. Additionally, the application of a tracking strategy provides a priori knowledge about the spatial position of the moving objects, which improves the performance of the Bayesian classifier.
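A per-pixel sketch of this asymmetric modeling, with illustrative bandwidths, priors, and sample sets rather than the paper's estimator, might look like:

```python
import numpy as np

# Minimal sketch: the background is scored with a color-only kernel density
# estimate, while the foreground model also includes spatial (pixel
# coordinate) information; a Bayesian rule decides between the two.

def gaussian_kde(x, samples, bandwidth):
    d = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * np.sum(d * d, axis=1)))

def classify_pixel(color, pos, bg_colors, fg_colors, fg_positions,
                   h_color=15.0, h_pos=8.0, prior_fg=0.3):
    p_bg = gaussian_kde(color, bg_colors, h_color)           # color only
    p_fg = (gaussian_kde(color, fg_colors, h_color) *
            gaussian_kde(pos, fg_positions, h_pos))          # color + space
    return "foreground" if prior_fg * p_fg > (1 - prior_fg) * p_bg else "background"

bg_colors = np.array([[100.0, 110.0, 90.0], [102.0, 108.0, 95.0]])
fg_colors = np.array([[30.0, 30.0, 200.0]])
fg_positions = np.array([[120.0, 80.0]])
print(classify_pixel(np.array([32.0, 28.0, 190.0]), np.array([118.0, 82.0]),
                     bg_colors, fg_colors, fg_positions))
```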
Abstract:
Access to information and continuous education represent critical factors for physicians and researchers around the world. For African professionals, this situation is even more problematic due to frequently difficult access to technological infrastructures and basic information. Both education and information technologies (including hardware, software and networking) are expensive and unaffordable for many African professionals. Thus, the use of e-learning and an open approach to information exchange and software use have already been proposed to address medical informatics issues in Africa. In this context, the AFRICA BUILD project, supported by the European Commission, aims to develop a virtual platform providing access to a wide range of biomedical informatics and learning resources for professionals and researchers in Africa. A consortium of four African and four European partners works together in this initiative. Within this framework, we have developed a prototype cloud-computing infrastructure to demonstrate, as a proof of concept, the feasibility of the approach. We conducted the experiment in two different locations in Africa: Burundi and Egypt. As shown in this paper, technologies such as cloud computing and the use of open-source medical software for a wide range of cases present significant challenges and opportunities for developing countries, such as many in Africa.
Abstract:
Due to advances in information technology in general, and databases in particular, data storage devices are becoming cheaper and data processing speed is increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain information valuable to organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics of multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross-check multidimensional model data. Using the above, we propose diagrams and descriptions of use cases, which can be considered patterns representing DSS functionality with regard to the processing of the DW data on which the DSS is based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide in the development of DSS.
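The cross-checking idea can be illustrated with a minimal sketch: every non-empty subset of dimensions defines one way to group and aggregate the measures; the schema names below are hypothetical:

```python
from itertools import combinations

# Minimal sketch: enumerate all the ways to cross multidimensional (DW) data.
# Each non-empty subset of dimensions is one candidate group-by clause for
# aggregating the measures. Dimension and measure names are hypothetical.

dimensions = ["time", "product", "region"]
measures = ["sales", "units"]

def crossings(dims):
    """All non-empty dimension subsets, i.e., all candidate group-by clauses."""
    for r in range(1, len(dims) + 1):
        yield from combinations(dims, r)

aggregates = ", ".join(f"SUM({m})" for m in measures)
for group_by in crossings(dimensions):
    cols = ", ".join(group_by)
    print(f"SELECT {cols}, {aggregates} FROM fact_table GROUP BY {cols}")
```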
Abstract:
In this paper we present a tool to perform guided HAZOP studies using a functional modeling framework: D-higraphs, a formalism that gathers, in a single model, structural (ontological) and functional information about the process under consideration. The tool is applied to an industrial case, showing that the proposed methodology fits its purpose and addresses some of the gaps and drawbacks of previously reported HAZOP assistant tools.
Abstract:
In this paper we present a new tool to perform guided HAZOP analyses. This tool uses a functional model of the process that merges its functional and structural information in a natural way. The functional modeling technique used is called D-higraphs. The tool solves some of the problems and drawbacks of other existing methodologies for the automation of HAZOPs. The applicability and ease of understanding of the proposed methodology are demonstrated in an industrial case.
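The core of a guided HAZOP assistant can be sketched minimally as the systematic combination of guide words with the variables attached to each function in the model; the toy "functional model" below merely stands in for a D-higraph and is entirely hypothetical:

```python
# Minimal sketch: enumerate HAZOP deviations by combining guide words with
# the variable attached to each functional node. A real D-higraph would carry
# much richer structural and functional information than this toy dictionary.

GUIDE_WORDS = ["NO", "MORE", "LESS", "REVERSE"]

functional_model = {
    "pump P-101": {"function": "transport", "variable": "flow"},
    "heater E-201": {"function": "heat", "variable": "temperature"},
}

def enumerate_deviations(model):
    for unit, node in model.items():
        for gw in GUIDE_WORDS:
            yield f"{gw} {node['variable']} in {unit} (function: {node['function']})"

for deviation in enumerate_deviations(functional_model):
    print(deviation)  # each deviation is then assessed by the analyst
```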