863 results for Gaussian Plume model for multiple sources for Cochin
Abstract:
Humans can be exposed to multiple chemicals at once from a variety of sources, and human risk assessment of multiple chemicals poses several challenges to scientists, risk assessors and risk managers. Ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, especially for vulnerable population groups such as children. A lack of sufficient data on mycotoxin risk assessment in children may contribute to inaccuracy in the estimated risk. Efforts must be undertaken to develop initiatives that promote a broad overview of multiple-mycotoxin risk assessment. The present work, developed within the MYCOMIX project, aims to assess the risk associated with the exposure of Portuguese children (<3 years old) to multiple mycotoxins through consumption of foods primarily marketed for this age group. A holistic approach was developed, applying deterministic and probabilistic tools to the calculation of mycotoxin daily intake values and integrating children's food consumption (3-day food diary), mycotoxin occurrence (HPLC-UV, HPLC-FD, LC-MS/MS and GC-MS), bioaccessibility (standardized in vitro digestion model) and toxicological data (in vitro evaluation of cytotoxicity, genotoxicity and intestinal impact). A case study concerning Portuguese children's exposure to patulin (PAT) and ochratoxin A (OTA), two mycotoxins co-occurring in processed cereal-based foods (PCBF) marketed in Portugal, was developed. The main results showed that there is low concern from a public health point of view regarding Portuguese children's exposure to PAT and OTA through consumption of PCBF, considering the estimated daily intakes of these two mycotoxins (worst-case scenarios of 22.930 ng/kg bw/day and 0.402 ng/kg bw/day for PAT and OTA, respectively), their bioaccessibility and the toxicology results. However, the present case study only concerns the risk associated with the consumption of PCBF, and children's diets include several other foods. The present work underlines the need to adopt a holistic approach for multiple-mycotoxin risk assessment, integrating data from the exposure, bioaccessibility and toxicity domains, in order to contribute to a more accurate risk assessment.
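The deterministic part of such an exposure assessment typically reduces to an estimated daily intake (EDI) calculation: occurrence level times daily consumption divided by body weight. Below is a minimal sketch of that arithmetic in Python; the function name and all numeric inputs are hypothetical illustrations, not the MYCOMIX data.

    # Deterministic estimated daily intake (EDI), a standard exposure formula;
    # all inputs below are hypothetical, not MYCOMIX measurements.
    def estimated_daily_intake(concentration_ng_g, consumption_g_day, body_weight_kg):
        """EDI (ng/kg bw/day) = occurrence (ng/g) x consumption (g/day) / body weight (kg)."""
        return concentration_ng_g * consumption_g_day / body_weight_kg

    # Worst-case-style inputs for a 12 kg child eating 60 g/day of a PCBF.
    print(estimated_daily_intake(concentration_ng_g=4.0,
                                 consumption_g_day=60,
                                 body_weight_kg=12))  # 20.0 ng/kg bw/day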
Abstract:
Nowadays, risks arising from the rapid development of the oil and gas industries are increasing significantly. As a result, one of the main concerns of both industrial and environmental managers is the identification and assessment of such risks in order to develop and maintain appropriate proactive measures. Oil spills from stationary sources in offshore zones are among the accidents with several adverse impacts on marine ecosystems. Considering a site's current situation and the relevant requirements and standards, the risk assessment process is capable not only of recognizing the probable causes of accidents but also of estimating the probability of occurrence and the severity of consequences. In this way, the results of risk assessment help managers and decision makers create and employ proper control methods. Most published models for risk assessment of oil spills are built on accurate databases and analysis of historical data, but unfortunately such databases are not accessible in most zones, especially in developing countries, or else they are newly established and not yet usable. This reveals the necessity of using expert systems and fuzzy set theory, which make it possible to formalize the expertise and experience of specialists who have worked in petroliferous areas for many years. On the other hand, in developing countries the damage to the environment and environmental resources is often not treated as a risk assessment priority and tends to be underestimated. For this reason, the model proposed in this research specifically addresses the environmental risk of oil spills from stationary sources in offshore zones.
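As a rough illustration of how fuzzy set theory lets expert ratings stand in for missing historical databases, the sketch below encodes ratings of spill probability and consequence severity with simple membership functions and combines them by Mamdani-style max-min inference. The membership breakpoints, the two-rule base and the output levels are assumptions made for the example, not the rule base of the proposed model.

    def low(x):
        """Membership in the fuzzy set 'low' (1 at 0, falling to 0 at 0.5)."""
        return max(0.0, min(1.0, (0.5 - x) / 0.5))

    def high(x):
        """Membership in the fuzzy set 'high' (0 below 0.3, rising to 1 at 1)."""
        return max(0.0, min(1.0, (x - 0.3) / 0.7))

    def spill_risk(probability, severity):
        """Mamdani-style max-min inference on an illustrative two-rule base:
        (high p AND high s) -> high risk; any other combination -> low risk."""
        high_risk = min(high(probability), high(severity))
        low_risk = max(min(low(probability), low(severity)),
                       min(low(probability), high(severity)),
                       min(high(probability), low(severity)))
        total = low_risk + high_risk
        # Defuzzify over two representative risk levels, 0.2 ('low') and 0.9 ('high').
        return (0.2 * low_risk + 0.9 * high_risk) / total if total else 0.0

    print(spill_risk(probability=0.6, severity=0.4))  # ~0.49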
Abstract:
Objectives: Dietary fibre (DF) is one of the components of diet that strongly contributes to health improvements, particularly in the gastrointestinal system. Hence, this work intended to evaluate the influence of sociodemographic variables such as age, gender, level of education, living environment and country on the levels of knowledge about dietary fibre (KADF), its sources and its effects on human health, using a validated scale. Study design: The present study was a cross-sectional study. Methods: A methodological study was conducted with 6010 participants residing in 10 countries on different continents (Europe, America, Africa). The instrument was a self-response questionnaire aimed at collecting information on knowledge about food fibres, previously used to validate the KADF scale, whose model was used in the present work to identify the best predictors of knowledge. The statistical tools used were basic descriptive statistics, decision trees and inferential analysis (t-test for independent samples with Levene's test, and one-way ANOVA with multiple-comparison post hoc tests). Results: The results showed that the best predictor for the three types of knowledge evaluated (about DF, about its sources and about its effects on human health) was always the country, meaning that social, cultural and/or political conditions greatly determine the level of knowledge. The tests also showed statistically significant differences in the three types of knowledge for all sociodemographic variables evaluated: age, gender, level of education, living environment and country. Conclusions: The results showed that actions planned to improve the level of knowledge should not be delineated generically so as to reach all sectors of the population; in addressing different people, different methodologies must be designed so as to provide effective health education.
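As a pointer to how the reported inference pipeline fits together, the sketch below runs Levene's test to choose the t-test variant for a two-group comparison and a one-way ANOVA for a multi-group factor, using scipy.stats on made-up knowledge scores; the group labels, sizes and distributions are invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical KADF knowledge scores for two genders and three countries.
    male, female = rng.normal(14, 3, 200), rng.normal(15, 3, 220)

    # Levene's test decides whether the independent-samples t-test may
    # assume equal variances.
    equal_var = stats.levene(male, female).pvalue > 0.05
    t_stat, p_value = stats.ttest_ind(male, female, equal_var=equal_var)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # One-way ANOVA across countries; post hoc multiple comparisons would
    # follow a significant F-test.
    c1, c2, c3 = (rng.normal(m, 3, 150) for m in (13, 15, 16))
    print(stats.f_oneway(c1, c2, c3))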
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature, as a dynamic boundary condition, to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models, generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and total variation diminishing (TVD) Runge–Kutta time integration.
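For readers unfamiliar with the time integrator named above, the following is a minimal sketch of a third-order TVD (strong-stability-preserving) Runge–Kutta step in the standard Shu–Osher form, advancing a semi-discrete finite volume system du/dt = L(u); the spatial operator here is a plain first-order upwind stand-in, not the paper's WENO reconstruction.

    import numpy as np

    def tvd_rk3_step(u, dt, L):
        """One third-order TVD/SSP Runge-Kutta step (Shu-Osher form)."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

    # Toy usage: periodic linear advection with a first-order upwind flux
    # standing in for a WENO reconstruction.
    n, dx = 200, 1.0 / 200
    dt = 0.5 * dx                      # CFL number 0.5 for unit wave speed
    u = np.exp(-100 * (np.linspace(0, 1, n, endpoint=False) - 0.5) ** 2)
    L = lambda v: -(v - np.roll(v, 1)) / dx
    for _ in range(100):
        u = tvd_rk3_step(u, dt, L)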
Abstract:
Simarouba glauca, a non-edible oilseed crop native to South Florida, is gaining popularity as a feedstock for the production of biodiesel. The University of Agricultural Sciences in Bangalore, India has developed a biodiesel production model based on the principles of decentralization, small scale and multiple fuel sources. The success of such a program depends on conversion efficiencies at multiple stages. The conversion efficiency of the field-level, decentralized production model was compared with the in-laboratory conversion efficiency benchmark. The study indicated that the field-level model's conversion efficiency was lower than that of the lab-scale setup. The fuel qualities and characteristics of the Simarouba glauca biodiesel were tested and found to meet the standards required for fuel designation. However, this research suggests that further investigation is still required for Simarouba glauca to be widely accepted as a biodiesel feedstock.
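Since the overall field-to-fuel yield is the product of the stage efficiencies, a shortfall at any decentralized stage compounds through the chain. A back-of-envelope sketch, with stage names and values that are purely hypothetical rather than the study's measurements:

    # Overall conversion efficiency compounds multiplicatively across stages;
    # the stage names and values are hypothetical, not the study's data.
    stages = {"oil extraction": 0.90, "transesterification": 0.92, "purification": 0.95}

    overall = 1.0
    for eff in stages.values():
        overall *= eff
    print(f"overall efficiency: {overall:.1%}")  # 78.7% from three 90-95% stages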
Abstract:
Effective management of invasive fishes depends on the availability of updated information about their distribution and spatial dispersion. Forensic analysis was performed using online and published data on the European catfish, Silurus glanis L., a recent invader in the Tagus catchment (Iberian Peninsula). Eighty records were obtained mainly from anglers’ fora and blogs, and more recently from www.youtube.com. Since the first record in 1998, S. glanis expanded its geographic range by 700 km of river network, occurring mainly in reservoirs and in high-order reaches. Human-mediated and natural dispersal events were identified, with the former occurring during the first years of invasion and involving movements of >50 km. Downstream dispersal directionality was predominant. The analysis of online data from anglers was found to provide useful information on the distribution and dispersal patterns of this non-native fish, and is potentially applicable as a preliminary, exploratory assessment tool for other non-native fishes.
Abstract:
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Given the large number of available Web services, finding an appropriate Web service matching the user's requirement is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery and match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe the services, together with the input and output parameters, can lead to more accurate Web service discovery, and appropriate linking of individually matched services can fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language (WSDL) document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find hidden meanings of the query terms that otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also confirm that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
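To make the link-analysis phase concrete, the sketch below models services as graph nodes with composition costs on feasible input/output links and runs the classic Floyd–Warshall all-pairs shortest-path algorithm, one standard choice for such a step; the four-service graph and its costs are invented for illustration, not taken from the thesis.

    from math import inf

    def floyd_warshall(cost):
        """All-pairs shortest paths on a dense cost matrix (inf = no direct link)."""
        n = len(cost)
        d = [row[:] for row in cost]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d

    # Illustrative graph: an edge i -> j exists when service i's outputs can
    # feed service j's inputs; weights are composition costs.
    C = [[0,   3,   inf, 7],
         [inf, 0,   2,   inf],
         [inf, inf, 0,   1],
         [inf, inf, inf, 0]]
    print(floyd_warshall(C)[0][3])  # cheapest composition 0 -> 1 -> 2 -> 3, cost 6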
Abstract:
This project is an extension of a previous CRC project (220-059-B) which developed a program for life prediction of gutters in Queensland schools. A number of sources of information on service life of metallic building components were formed into databases linked to a Case-Based Reasoning Engine which extracted relevant cases from each source.
Abstract:
In this paper, the stability of an autonomous microgrid with multiple distributed generators (DGs) is studied through eigenvalue analysis. It is assumed that all the DGs are connected through Voltage Source Converters (VSCs) and that all connected loads are passive. The VSCs are controlled by state feedback controllers to achieve the desired voltage and current outputs, which are decided by a droop controller. The state space models of each of the converters with its associated feedback are derived. These are then connected with the state space models of the droop controller, network and loads to form a homogeneous model, from which the eigenvalues are evaluated. The system stability is then investigated as a function of the droop controller's real and reactive power coefficients. These observations are then verified through simulation studies using PSCAD/EMTDC. It is shown that the simulation results closely agree with the stability behavior predicted by the eigenvalue analysis.
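As a compact illustration of the eigenvalue criterion used here, the sketch below sweeps a real-power droop gain, rebuilds a closed-loop state matrix at each value, and reports the first gain at which an eigenvalue crosses into the right half-plane. The 2x2 matrix is a toy stand-in; the study assembles the actual matrix from the converter, droop, network and load models.

    import numpy as np

    def is_stable(A):
        """dx/dt = A x is asymptotically stable iff every eigenvalue of A
        has a negative real part."""
        return np.linalg.eigvals(A).real.max() < 0

    # Sweep a hypothetical real-power droop gain m and locate instability onset.
    for m in np.linspace(0.1, 5.0, 50):
        A = np.array([[-1.0, m],
                      [0.3, -0.5]])   # toy closed-loop state matrix
        if not is_stable(A):
            print(f"instability onset near droop gain m = {m:.2f}")
            break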
Abstract:
The early stages of the building design process are when the most far-reaching decisions are made regarding the configuration of the proposed project. This paper examines methods of providing decision support to building designers across multiple disciplines during the early stage of design. The level of detail supported is at the massing study stage, where the basic envelope of the project is being defined. The block outlines of the building envelope are sliced into floors; within a floor, the only spatial divisions supported are the "user" space and the building core, which includes vertical transportation systems, emergency egress and vertical duct runs. The current focus of the project described in the paper is multi-storey mixed-use office/residential buildings with car parking, a common type of building in redevelopment projects within and adjacent to the central business districts of major Australian cities. The key design parameters for system selection across the major systems in multi-storey building projects (architectural, structural, HVAC, vertical transportation, electrical distribution, fire protection, hydraulics and cost) are examined. These have been identified through literature research and discussions with building designers from various disciplines, and the resulting information is being encoded in decision support tools. The decision support tools communicate through a shared database to ensure that the relevant information is shared across all of the disciplines. An internal data model has been developed to support the very early design phase and the high-level system descriptions it requires. A mapping to IFC 2x2 has also been defined to ensure that this early information is available at later stages of the design process.
Abstract:
There is currently a strong worldwide focus on the potential of large-scale Electronic Health Record (EHR) systems to cut costs and improve patient outcomes through increased efficiency. This is accomplished by aggregating medical data from isolated Electronic Medical Record databases maintained by different healthcare providers. Concerns about the privacy and reliability of Electronic Health Records are crucial to healthcare service consumers. Traditional security mechanisms are designed to satisfy confidentiality, integrity and availability requirements, but they fail to provide a measurement tool for data reliability from a data entry perspective. In this paper, we introduce a Medical Data Reliability Assessment (MDRA) service model that assesses the reliability of medical data by evaluating the trustworthiness of its sources, usually the healthcare provider that created the data and the medical practitioner who diagnosed the patient and authorised entry of the data into the patient's medical record. The result is then expressed by manipulating health record metadata to alert medical practitioners relying on the information to possible reliability problems.
Abstract:
Electronic Health Record (EHR) systems are being introduced to overcome the limitations of paper-based and isolated Electronic Medical Record (EMR) systems. This is accomplished by aggregating medical data and consolidating them in one digital repository. Though an EHR system provides obvious functional benefits, there is growing concern about the privacy and reliability (trustworthiness) of Electronic Health Records. Security requirements such as confidentiality, integrity and availability can be satisfied by traditional hard security mechanisms. However, measuring data trustworthiness from the perspective of data entry is an issue that cannot be solved with traditional mechanisms, especially since degrees of trust change over time. In this paper, we introduce a Time-variant Medical Data Trustworthiness (TMDT) assessment model that evaluates the trustworthiness of medical data by evaluating, with respect to a certain period of time, the trustworthiness of its sources, namely the healthcare organisation where the data was created and the medical practitioner who diagnosed the patient and authorised entry of the data into the patient's medical record. The result can then be used by the EHR system to manipulate health record metadata and alert medical practitioners relying on the information to possible reliability problems.
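One simple way to realise the idea that degrees of trust change over time, offered purely as an illustrative sketch rather than the TMDT model's actual formulas, is to decay each source's trust score toward an uninformed prior as its evidence ages and then combine the organisation and practitioner scores with fixed weights:

    def decayed_trust(score, age_days, half_life_days=180, prior=0.5):
        """Decay a trust score toward an uninformed prior as evidence ages;
        the half-life and prior are illustrative assumptions."""
        w = 0.5 ** (age_days / half_life_days)
        return w * score + (1 - w) * prior

    def record_trust(org_score, org_age, doc_score, doc_age, org_weight=0.4):
        """Weighted combination of the two sources behind one record entry."""
        return (org_weight * decayed_trust(org_score, org_age)
                + (1 - org_weight) * decayed_trust(doc_score, doc_age))

    # A record entered 90 days ago by a well-rated practitioner at a
    # moderately rated organisation (all scores hypothetical).
    print(f"{record_trust(0.7, 90, 0.95, 90):.3f}")  # ~0.748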