870 results for information flow model


Relevance:

100.00%

Publisher:

Abstract:

Background Poor clinical handover has been associated with inaccurate clinical assessment and diagnosis, delays in diagnosis and test ordering, medication errors and decreased patient satisfaction in the acute care setting. Research on the handover process in the residential aged care sector is very limited. Purpose The aims of this study were to: (i) develop an in-depth understanding of the handover process in aged care by mapping all the key activities and their information dynamics, (ii) identify gaps in information exchange in the handover process and analyze their implications for resident safety, and (iii) develop practical recommendations on how information and communication technology (ICT) can improve the process and resident safety. Methods The study was undertaken at a large metropolitan facility in NSW with more than 300 residents and a staff of 55 registered nurses (RNs) and 146 assistants in nursing (AINs). A total of 3 focus groups, 12 interviews and 3 observation sessions were conducted from July to October 2010. Process mapping was undertaken by translating the qualitative data via a five-category code book developed prior to the analysis. Results Three major sub-processes were identified and mapped: Handover process (HOP) I “Information gathering by RN”, HOP II “Preparation of preliminary handover sheet” and HOP III “Execution of handover meeting”. Inefficiencies were identified in the handover process, including duplication of information, use of multiple communication modes and information sources, and lack of standardization. Conclusion By providing a robust process model of handover, this study makes two critical contributions to research in aged care: (i) a means to identify important, possibly suboptimal practices; and (ii) valuable evidence to plan and improve ICT implementation in residential aged care. The mapping of this process enabled analysis of gaps in information flow and their potential impacts on resident safety. In addition, it offers the basis for further studies into a process that, despite its importance for securing resident safety and continuity of care, lacks research.

Relevance:

100.00%

Publisher:

Abstract:

The study presents a theory of utility models based on aspiration levels, as well as the application of this theory to the planning of timber flow economics. The first part of the study derives the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise linear functions. The decision-maker's standpoint on choices is emphasized through the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow that differs from the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importances and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on the use of aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
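For concreteness, one common way of writing aspiration-based partial utilities and the two compound forms is sketched below; the exact parameterization used in the study may differ, so this is an illustration, not the paper's formulation.

```latex
% Piecewise-linear partial utility for criterion i, defined by the
% reservation level r_i (utility 0) and the aspiration level a_i (utility 1).
u_i(x_i) =
\begin{cases}
0, & x_i \le r_i,\\[2pt]
\dfrac{x_i - r_i}{a_i - r_i}, & r_i < x_i < a_i,\\[2pt]
1, & x_i \ge a_i,
\end{cases}
\qquad
U_{\text{add}}(x) = \sum_i w_i\, u_i(x_i),
\qquad
U_{\text{mult}}(x) = \prod_i u_i(x_i)^{\,w_i}.
```

Under this kind of parameterization the additive form yields linear indifference curves within each piece, while the multiplicative form yields strictly convex ones, which is consistent with the stability contrast reported above.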

Relevance:

100.00%

Publisher:

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future, with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
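For reference, the branch flow (DistFlow) equations commonly used for radial networks take the following form, where v_i = |V_i|^2, ℓ_ij = |I_ij|^2, and (p_j, q_j) are net loads at bus j; the dissertation's exact formulation and relaxation steps may differ in detail.

```latex
% Branch flow (DistFlow) model over lines (i,j); the SOCP relaxation
% replaces the last equality with the inequality \ell_{ij} \ge (P_{ij}^2+Q_{ij}^2)/v_i.
\begin{aligned}
P_{ij} - r_{ij}\,\ell_{ij} - p_j &= \sum_{k:(j,k)\in E} P_{jk}, \\
Q_{ij} - x_{ij}\,\ell_{ij} - q_j &= \sum_{k:(j,k)\in E} Q_{jk}, \\
v_j &= v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij}, \\
\ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}.
\end{aligned}
```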

The second half (chapters 4 and 5), however, is dedicated to the study of local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
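As an illustration of the piecewise-linear volt/var control curves mentioned above, the following sketch maps a measured voltage to a reactive power setpoint; the breakpoints and limits are hypothetical values for illustration, not the curves studied in the dissertation.

```python
import numpy as np

def volt_var_setpoint(v_pu, q_max=0.44):
    """Piecewise-linear volt/var curve: reactive power setpoint (per unit of
    rated capacity) as a function of measured voltage (per unit).
    Breakpoints below are illustrative only."""
    v_points = [0.90, 0.96, 1.04, 1.10]      # voltage breakpoints (p.u.)
    q_points = [q_max, 0.0, 0.0, -q_max]      # inject vars when low, absorb when high
    return np.interp(v_pu, v_points, q_points)

print(volt_var_setpoint(0.93))   # positive: injects vars to support a low voltage
print(volt_var_setpoint(1.07))   # negative: absorbs vars to pull a high voltage down
```

Such a curve is a purely local law: each inverter reacts only to its own bus voltage, which is why this class of controllers needs no communication infrastructure.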

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological and clinical research, in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and of funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. CONCLUSIONS: Based on our empirical observations and the resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing to and searching for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.
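A minimal stock-and-flow sketch of the kind of System Dynamics simulation described above (cumulative publications driven by shared data sets under different sharing policies); every rate, multiplier and functional form here is a hypothetical placeholder chosen only to illustrate the simulation mechanics, not the paper's calibrated model.

```python
def simulate(years=10, dt=0.25, metadata_policy=True, data_policy=False):
    """Toy stock-and-flow run: shared data sets accumulate and drive new
    publications. All rates are illustrative assumptions."""
    shared_datasets, publications = 50.0, 0.0
    sharing_rate = 10.0 * (1.5 if metadata_policy else 1.0) * (2.0 if data_policy else 1.0)
    pubs_per_dataset_per_year = 0.3
    t = 0.0
    while t < years:
        shared_datasets += sharing_rate * dt                      # inflow to the data-set stock
        publications += pubs_per_dataset_per_year * shared_datasets * dt  # inflow to the publication stock
        t += dt
    return round(shared_datasets), round(publications)

for meta, data in [(False, False), (True, False), (True, True)]:
    print(f"metadata={meta}, data={data} ->", simulate(metadata_policy=meta, data_policy=data))
```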

Relevance:

100.00%

Publisher:

Abstract:

When studying heterogeneous aquifer systems, especially at regional scale, a degree of generalization is anticipated. This can be due to sparse sampling regimes, complex depositional environments or lack of access for measuring the subsurface, and it can lead to an inaccurate conceptualization, which is detrimental when applied to groundwater flow models. It is important that numerical models are based on observed and accurate geological information and do not rely on the distribution of artificial aquifer properties. This can still be problematic, as data will be modelled at a different scale from that at which they were collected. It is proposed here that integrating geophysics and upscaling techniques can assist in building a more realistic and deterministic groundwater flow model. In this study, the sedimentary aquifer of the Lagan Valley in Northern Ireland is chosen because it is intruded by sub-vertical dolerite dykes, which are of a lower permeability than the sandstone aquifer. The use of airborne magnetics allows the delineation of heterogeneities, confirmed by field analysis. Permeability measured at the field scale is then upscaled to different levels using a correlation with the geophysical data, creating equivalent parameters that can be directly imported into numerical groundwater flow models. These parameters include directional equivalent permeabilities and anisotropy. Several stages of upscaling are modelled using finite elements. Initial modelling provides promising results, especially at the intermediate scale, suggesting an accurate distribution of aquifer properties. This deterministic methodology is being expanded to include stochastic methods of locating heterogeneity based on airborne geophysical data, through the Direct Sampling method of Multiple-Point Statistics (MPS). This method uses the magnetics as a training image to computationally determine a probabilistic occurrence of heterogeneity. There is also a need to apply the method to alternative geological contexts where the heterogeneity is of a higher permeability than the host rock.
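To illustrate the kind of upscaling step described above, the sketch below computes equivalent directional block permeabilities for a grid of cells using the classical layered-medium estimates (harmonic mean along a flow path, arithmetic mean across parallel paths). The actual study correlates permeability with airborne magnetic data rather than using a synthetic grid, so this is only a schematic of how a low-permeability dyke produces anisotropic equivalent parameters.

```python
import numpy as np

def equivalent_permeability(k):
    """Upscale a 2D grid of cell permeabilities k[y, x] to equivalent
    directional block permeabilities (simple series/parallel estimates)."""
    # Flow in x: cells along a row act in series (harmonic mean),
    # rows act in parallel (arithmetic mean of the row values).
    kx = np.mean([len(row) / np.sum(1.0 / row) for row in k])
    # Flow in y: cells along a column act in series, columns act in parallel.
    ky = np.mean([k.shape[0] / np.sum(1.0 / col) for col in k.T])
    return kx, ky, kx / ky          # equivalent kx, ky and anisotropy ratio

k = np.full((10, 10), 1e-12)        # sandstone-like background permeability (m^2)
k[:, 4] = 1e-16                     # a low-permeability sub-vertical dyke
print(equivalent_permeability(k))   # kx is strongly reduced, ky is barely affected
```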

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Abstract Background To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques; therefore, the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, by applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage when compared to other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
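A minimal sketch of sparse VAR(1) estimation in the spirit of the SVAR approach, using an L1-penalized regression per target gene; the penalty value, the synthetic data and the edge-selection rule are illustrative assumptions, and the paper's actual estimator and false-discovery-rate test are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_var_edges(X, alpha=0.05):
    """X: time-series expression matrix of shape (time points, genes).
    Fits x_t ~ A x_{t-1} with an L1 penalty on each row of A and returns
    directed edges (parent -> child) whose coefficients are nonzero."""
    past, present = X[:-1], X[1:]
    edges = []
    for child in range(X.shape[1]):
        model = Lasso(alpha=alpha, max_iter=10000).fit(past, present[:, child])
        for parent, coef in enumerate(model.coef_):
            if abs(coef) > 1e-8:
                edges.append((parent, child, coef))
    return edges

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))            # fewer samples (20) than genes (50)
print(len(sparse_var_edges(X)))           # number of inferred directed edges
```

The sparsity penalty is what keeps the per-gene regressions well-posed even when samples are fewer than genes, which is the situation the abstract emphasizes.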

Relevance:

100.00%

Publisher:

Abstract:

The objective of this research is to investigate the consequences of sharing or using information generated in one phase of a project in subsequent life cycle phases. Sometimes the assumptions supporting the information change, and at other times the context within which the information was created changes in a way that causes the information to become invalid. Often these inconsistencies are not discovered until the damage has occurred. This study builds on previous research that proposed a framework based on the metaphor of ‘ecosystems’ to model such inconsistencies in the 'supply chain' of life cycle information (Brokaw and Mukherjee, 2012). The outcome of such inconsistencies is often litigation. Therefore, this paper studies, within the ecosystems framework, a set of legal cases that resulted from inconsistencies in life cycle information. For each project, the type of errant information, the creator and user of the information and their relationship, and the time of creation and usage of the information in the project life cycle are investigated to assess the causes of failures in precise and accurate information flow, as well as the impact of such failures in later stages of the project. The analysis shows that misleading information is mostly due to a lack of collaboration. In addition, in all the studied cases, lack of compliance checking, imprecise data and insufficient clarification hinder the accurate and smooth flow of information. The paper presents findings regarding the bottlenecks in the information flow process during the design, construction and post-construction phases. It also highlights the role of collaboration, as well as information integration and management, during the project life cycle, and presents a baseline for improving the information supply chain through the life cycle of the project.

Relevance:

100.00%

Publisher:

Abstract:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issue of short modeled flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
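As a rough illustration of how a DEM drives a flow-path model of this kind, the sketch below traces a steepest-descent path from a vent cell over a synthetic noisy DEM; DOWNFLOW additionally perturbs the DEM stochastically over many runs, and FLOWGO's thermo-rheological treatment of flow length is not reproduced here.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10000):
    """Trace a steepest-descent path over a DEM (2D array of elevations),
    starting at cell index `start` (row, col). Stops at a local pit."""
    path = [start]
    r, c = start
    for _ in range(max_steps):
        window = dem[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        dr, dc = np.unravel_index(np.argmin(window), window.shape)
        nr, nc = max(r - 1, 0) + dr, max(c - 1, 0) + dc
        if dem[nr, nc] >= dem[r, c]:      # local pit or flat: the path stops
            break
        r, c = nr, nc
        path.append((r, c))
    return path

dem = np.add.outer(np.linspace(100, 0, 50), np.zeros(50))           # simple slope
dem += np.random.default_rng(1).normal(scale=0.1, size=dem.shape)   # DEM noise
print(len(steepest_descent_path(dem, (0, 25))))
```

Raising the noise scale in this toy example shortens the traced path, which mirrors the abstract's observation that DEM noise truncates modeled flow lengths until the DEM is filtered.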

Relevance:

100.00%

Publisher:

Abstract:

Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical, because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information, and therefore considering limited matrix diffusion into a thin fault gouge horizon, resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption distribution coefficients (Kd) are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.
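For reference, a commonly used single-fracture/matrix-diffusion (dual porosity) formulation of the kind invoked here is sketched below, in the style of the classical Tang-type models; the calibrated model in the paper may differ in geometry, coupling and parameters, so this is an illustrative form only.

```latex
% Advective-dispersive transport of concentration c_f in the fracture
% (coordinate x along flow, half-aperture b), coupled to diffusion of c_m
% into the adjacent porous matrix (coordinate z normal to the fracture);
% R denotes retardation factors, theta_m matrix porosity, D_p pore diffusivity.
\begin{aligned}
R_f\,\frac{\partial c_f}{\partial t}
  &= -\,v\,\frac{\partial c_f}{\partial x}
     + D_L\,\frac{\partial^2 c_f}{\partial x^2}
     + \frac{\theta_m D_p}{b}\,
       \left.\frac{\partial c_m}{\partial z}\right|_{z=b}, \\[4pt]
R_m\,\frac{\partial c_m}{\partial t}
  &= D_p\,\frac{\partial^2 c_m}{\partial z^2},
  \qquad D_e = \theta_m D_p .
\end{aligned}
```

The diffusive exchange term is what produces the characteristic long tailing of the breakthrough curve that the calibration relied on.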

Relevance:

100.00%

Publisher:

Abstract:

A credit-rationing model similar to that of Stiglitz and Weiss [1981] is combined with the information externality model of Lang and Nakamura [1993] to examine the properties of mortgage markets characterized by both adverse selection and information externalities. In a credit-rationing model, additional information increases lenders' ability to distinguish risks, which leads to an increased supply of credit. According to Lang and Nakamura, a larger supply of credit leads to additional market activity and, therefore, greater information. The combination of these two propositions leads to a general equilibrium model, whose properties this paper describes. The paper provides another sufficient condition under which credit rationing falls with information: external information improves the accuracy of equity-risk assessments of properties, which reduces credit rationing. Contrary to intuition, this increased accuracy raises the mortgage interest rate. This clarifies the trade-offs between reduced credit rationing and the quality of the applicant pool.
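A purely illustrative caricature of the feedback loop described above (information improves risk assessment, which expands credit supply, which generates market activity and hence more information, up to a fixed point); the functional forms and numbers are hypothetical and are not the paper's model.

```python
def equilibrium(info0=1.0, tol=1e-9, max_iter=1000):
    """Iterate the information -> credit supply -> market activity -> information
    loop until it converges. All functional forms are hypothetical."""
    info = info0
    for _ in range(max_iter):
        credit_supply = 100 * info / (1 + info)    # more information, less rationing
        market_activity = 0.5 * credit_supply       # more credit, more transactions
        new_info = 0.05 * market_activity           # more transactions, more information
        if abs(new_info - info) < tol:
            return new_info, credit_supply
        info = new_info
    return info, credit_supply

print(equilibrium())   # the fixed point of the information/credit feedback loop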

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research was to apply model checking, using a symbolic model checker, to Predicate Transition Nets (PrT Nets). A PrT Net is a formal model of information flow which allows system properties to be modeled and analyzed. The aim of this thesis was to use the modeling and analysis power of PrT nets to provide a mechanism for the system model to be verified. The Symbolic Model Verifier (SMV) was the model checker chosen in this thesis, and in order to verify the PrT net model of a system, the model was translated into the SMV input language. A software tool was implemented which translates a PrT Net into the SMV language, hence enabling the process of model checking. The system includes two parts: the PrT net editor, where the representation of a system can be edited, and the translator, which converts the PrT net into an SMV program.
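To make the model-checking pipeline concrete, the sketch below enumerates the reachable markings of a simple place/transition net, which is exactly the kind of state space an SMV translation would let the model checker explore; it uses a plain Petri net rather than a full predicate/transition net, and the thesis's actual PrT-to-SMV translator is not reproduced here.

```python
from collections import deque

# Each transition consumes tokens from input places and produces tokens in
# output places: (inputs, outputs), each a dict of place -> token count.
transitions = {
    "send":    ({"ready": 1},      {"in_transit": 1}),
    "receive": ({"in_transit": 1}, {"delivered": 1}),
}
initial = {"ready": 2, "in_transit": 0, "delivered": 0}

def fire(marking, inputs, outputs):
    """Fire a transition if enabled, returning the new marking (else None)."""
    if any(marking.get(p, 0) < n for p, n in inputs.items()):
        return None
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n
    return new

def reachable(start):
    """Breadth-first enumeration of all reachable markings."""
    seen, queue = set(), deque([start])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for inputs, outputs in transitions.values():
            nxt = fire(m, inputs, outputs)
            if nxt is not None:
                queue.append(nxt)
    return seen

print(len(reachable(initial)))   # number of reachable markings
```

A symbolic model checker such as SMV represents this same reachable state space with boolean/arithmetic state variables and transition relations instead of explicit enumeration, which is what makes verification of larger nets feasible.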

Relevance:

90.00%

Publisher:

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches also have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model for threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch and is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
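A highly simplified sketch of the two-stage idea (a term-based threshold filter followed by pattern-based re-ranking); the scoring functions, threshold and toy data are illustrative placeholders, not the rough-set threshold model or the pattern taxonomy ranking developed in the thesis.

```python
def term_score(doc_terms, profile_weights):
    """Stage 1: simple weighted term overlap with the user profile."""
    return sum(profile_weights.get(t, 0.0) for t in doc_terms)

def pattern_score(doc_terms, patterns):
    """Stage 2: reward documents that contain whole patterns (term sets)."""
    return sum(w for pattern, w in patterns if pattern <= doc_terms)

def two_stage_filter(docs, profile_weights, patterns, threshold):
    # Stage 1: topic filtering removes likely irrelevant documents.
    survivors = [d for d in docs if term_score(d["terms"], profile_weights) >= threshold]
    # Stage 2: pattern mining re-ranks the (now much smaller) remaining set.
    return sorted(survivors, key=lambda d: pattern_score(d["terms"], patterns), reverse=True)

docs = [
    {"id": 1, "terms": {"rough", "set", "filtering"}},
    {"id": 2, "terms": {"football", "score"}},
    {"id": 3, "terms": {"pattern", "taxonomy", "filtering"}},
]
profile = {"rough": 1.0, "set": 0.5, "filtering": 1.0, "pattern": 1.0, "taxonomy": 0.8}
patterns = [(frozenset({"pattern", "taxonomy"}), 2.0), (frozenset({"rough", "set"}), 1.5)]
print([d["id"] for d in two_stage_filter(docs, profile, patterns, threshold=1.5)])
```

The design point this mirrors is that the cheap first stage controls the volume of documents, so the more expensive pattern-based scoring only runs on a small candidate set.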

Relevance:

90.00%

Publisher:

Abstract:

The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially. It is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document that contains the query term as a sequence of Chinese characters may not really be relevant to the query, because the query term (as a sequence of Chinese characters) may not be a valid Chinese word in that document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on their relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if only Chinese segmentation is incorporated with the traditional character-based approach. In the second approach, we propose a novel query expansion method which applies text mining techniques in order to find the most relevant words with which to extend the query. Unlike most existing query expansion methods, which generally select highly frequent indexing terms from the retrieved documents to expand the query, our approach utilizes text mining techniques to find patterns in the retrieved documents that correlate highly with the query term and then uses the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage investigates whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, a text mining-based query expansion approach is implemented, and a further experiment compares the performance of the standard Rocchio approach with that of the proposed text mining-based query expansion method. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that, by incorporating the text mining-based query expansion with the hybrid model, significant improvement has been achieved in both precision and recall assessments.
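A minimal sketch of the hybrid ranking idea, combining character-based evidence with word-based evidence from a segmented index; the segmentation, the scoring functions and the mixing weight are illustrative assumptions, not the thesis's actual ranking methods.

```python
def char_bigrams(text):
    return {text[i:i + 2] for i in range(len(text) - 1)}

def char_score(query, doc):
    """Character-based evidence: overlap of character bigrams."""
    q, d = char_bigrams(query), char_bigrams(doc)
    return len(q & d) / len(q) if q else 0.0

def word_score(query_words, doc_words):
    """Word-based evidence: fraction of segmented query words found in the document."""
    return sum(w in doc_words for w in query_words) / len(query_words)

def hybrid_score(query, query_words, doc, doc_words, lam=0.6):
    # lam weights word-based against character-based evidence (illustrative value).
    return lam * word_score(query_words, doc_words) + (1 - lam) * char_score(query, doc)

# Toy example with pre-segmented words (a real system would use a segmenter).
query, query_words = "北京大学", ["北京", "大学"]
doc, doc_words = "北京的大学排名", ["北京", "的", "大学", "排名"]
print(hybrid_score(query, query_words, doc, doc_words))
```

The character-based term keeps recall up when segmentation splits a query differently from the document, while the word-based term rewards documents in which the query really occurs as valid words, which is the trade-off the hybrid model is designed to balance.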