971 results for relative utility models
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Systematic errors can have a significant effect on GPS observables. In medium and long baselines the major systematic error sources are ionospheric and tropospheric refraction and GPS satellite orbit errors, whereas in short baselines multipath is more relevant. These errors degrade the accuracy of GPS positioning and are therefore a critical problem for high-precision positioning applications. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalised least squares technique. It uses a natural cubic spline to model the errors as functions that vary smoothly in time. The systematic error functions, ambiguities and station coordinates are estimated simultaneously. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method.
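The penalised least squares idea can be illustrated with a small numerical sketch. The code below is a hypothetical toy, not the authors' implementation: a parametric unknown (standing in for coordinates or ambiguities) is estimated together with a smoothly varying error function by adding a second-difference roughness penalty, the discrete analogue of the natural cubic spline penalty. The regressor, smoothing parameter and noise level are illustrative assumptions.

```python
# Minimal penalised least squares sketch (illustrative only, not the paper's code).
# Observation model: y_t = g(t) * x + s(t) + noise, where x is a parametric unknown,
# g(t) a known regressor, and s(t) a smooth systematic error modelled nonparametrically.
import numpy as np

rng = np.random.default_rng(42)
n = 200
t = np.linspace(0.0, 1.0, n)
g = np.cos(6 * np.pi * t)                      # known, non-affine regressor (assumed)
x_true = 2.5
s_true = 0.4 * np.sin(2 * np.pi * t)           # smooth systematic error (multipath-like)
y = g * x_true + s_true + 0.05 * rng.standard_normal(n)

# Design matrix: one column for x, identity block for the n values of s(t).
A = np.hstack([g[:, None], np.eye(n)])

# Second-difference penalty on s(t), the discrete analogue of the natural cubic
# spline roughness penalty used in the semiparametric model.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 100.0                                    # smoothing parameter (assumed value)
P = np.zeros((n + 1, n + 1))
P[1:, 1:] = lam * (D.T @ D)

# Penalised normal equations: (A^T A + P) beta = A^T y
beta = np.linalg.solve(A.T @ A + P, A.T @ y)
x_hat, s_hat = beta[0], beta[1:]
print(f"estimated x = {x_hat:.3f} (true {x_true})")
```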
Abstract:
We describe and begin to evaluate a parameterization to include the vertical transport of hot gases and particles emitted from biomass burning in low-resolution atmospheric-chemistry transport models. This sub-grid transport mechanism is simulated by embedding a 1-D cloud-resolving model, with appropriate lower boundary conditions, in each column of the 3-D host model. Through assimilation of remote-sensing fire products, we identify which columns contain fires, and appropriate fire properties are selected from a land-use dataset. The host model provides the environmental conditions, allowing the plume rise to be simulated explicitly. The derived plume height is then used in the source emission field of the host model as the effective injection height, at which the material emitted during the flaming phase is released. Model results are compared with CO aircraft profiles from an Amazon basin field campaign and with satellite data, showing the large impact that this mechanism has on model performance. We also show the relative roles of the main vertical transport mechanisms, shallow and deep moist convection and the pyro-convection (dry or moist) induced by vegetation fires, in the distribution of biomass burning CO emissions in the troposphere.
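As an illustration of how a derived plume height can be used as an effective injection height, the sketch below places a column's flaming-phase emission into the host-model layer containing that height. The grid spacing, emission amount and plume height are hypothetical; this is only a schematic of the idea, not the parameterization's actual code.

```python
# Sketch: releasing flaming-phase emissions at a plume-derived injection height in a
# host-model column. Grid, heights and emission numbers are illustrative only.
import numpy as np

def inject(emission, z_inj, z_edges):
    """Place the whole flaming-phase emission into the layer containing z_inj."""
    rates = np.zeros(len(z_edges) - 1)
    k = np.searchsorted(z_edges, z_inj) - 1
    rates[min(max(k, 0), len(rates) - 1)] = emission
    return rates

z_edges = np.arange(0, 16_000, 500.0)            # host-model layer edges (m), assumed
co_flaming = 120.0                               # kg of CO per grid cell, hypothetical
z_plume = 4_200.0                                # plume-rise height from the 1-D model (m)
profile = inject(co_flaming, z_plume, z_edges)
print("injection layer index:", int(np.flatnonzero(profile)[0]))
```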
Abstract:
The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the positioning. These errors are mainly related to the GPS satellite orbits, multipath and atmospheric effects. Lately, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model, into which the error functions are incorporated, and the results are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first one involved a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively. In the second experiment, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm in h for the CLS. In these tests, it was also possible to verify a considerable improvement in the ambiguity resolution using the PLS compared with the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
Abstract:
Nowadays, with the expansion of reference station networks, several positioning techniques have been developed and/or improved. Among them, the VRS (Virtual Reference Station) concept has been widely used. In this paper the goal is to generate VRS data with a modified technique. In the proposed methodology the DD (double-difference) ambiguities are not computed; the network correction terms are obtained using only atmospheric (ionospheric and tropospheric) models. To carry out the experiments, data from five reference stations of the GPS Active Network of the West of São Paulo State and one extra station were used. To evaluate the VRS data quality, three different strategies were used: PPP (Precise Point Positioning), relative positioning in static and kinematic modes, and DGPS (Differential GPS). Furthermore, the VRS data were generated at the position of a real reference station. The results provided by the VRS data agree quite well with those of the real data files.
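A much-simplified sketch of the underlying idea, displacing a reference-station observation to the virtual position using a recomputed geometric range plus modelled atmospheric delay differences, is given below. It is not the modified VRS generation technique of the paper; the function, station coordinates and delay values are hypothetical.

```python
# Simplified sketch of generating a VRS observation from a master reference station
# (illustrative assumptions; not the modified technique described in the paper).
import numpy as np

def geometric_range(rx_pos, sat_pos):
    """Euclidean distance receiver -> satellite (metres, ECEF)."""
    return float(np.linalg.norm(np.asarray(sat_pos) - np.asarray(rx_pos)))

def vrs_pseudorange(p_master, master_pos, vrs_pos, sat_pos,
                    dion_master, dion_vrs, dtrop_master, dtrop_vrs):
    """Displace a master-station pseudorange to the virtual position.

    The geometric term is recomputed exactly, while the atmospheric terms are
    replaced by modelled ionospheric/tropospheric delays at each location
    (passed in here as plain numbers from whatever atmospheric model is used).
    """
    dgeom = geometric_range(vrs_pos, sat_pos) - geometric_range(master_pos, sat_pos)
    datm = (dion_vrs - dion_master) + (dtrop_vrs - dtrop_master)
    return p_master + dgeom + datm

# Hypothetical numbers (metres): master observation and modelled delays.
p_vrs = vrs_pseudorange(
    p_master=22_345_678.90,
    master_pos=(3_687_624.0, -4_620_818.0, -2_386_880.0),
    vrs_pos=(3_690_000.0, -4_618_500.0, -2_387_500.0),
    sat_pos=(15_600_000.0, -20_500_000.0, -8_000_000.0),
    dion_master=3.2, dion_vrs=3.4, dtrop_master=2.5, dtrop_vrs=2.6,
)
print(f"VRS pseudorange: {p_vrs:.2f} m")
```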
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Trichoepithelioma is a benign neoplasm that shares both clinical and histological features with basal cell carcinoma. It is important to distinguish these neoplasms because they require different clinical management and therapeutic planning. Many studies have addressed the use of immunohistochemistry to improve the differential diagnosis of these tumors. These studies present conflicting results when addressing the same markers, probably owing to the small number of basaloid tumors in their samples, which generally did not exceed 50 cases. We built a tissue microarray with 162 trichoepithelioma and 328 basal cell carcinoma biopsies and tested a panel of immune markers composed of CD34, CD10, epithelial membrane antigen, Bcl-2, cytokeratins 15 and 20 and D2-40. The results were analyzed using multiple linear and logistic regression models. This analysis revealed a model that could differentiate trichoepithelioma from basal cell carcinoma in 36% of the cases. The panel of immunohistochemical markers required to differentiate between these tumors was composed of CD10, cytokeratin 15, cytokeratin 20 and D2-40. The results obtained in this work were generated from a large number of biopsies and confirm the overlapping epithelial and stromal immunohistochemical profiles of these basaloid tumors. The results also corroborate the view that trichoepithelioma and basal cell carcinoma represent two different points in the differentiation of a single cell type. Despite the use of panels of immune markers, histopathological criteria associated with clinical data certainly remain the best guideline for the differential diagnosis of trichoepithelioma and basal cell carcinoma. Modern Pathology (2012) 25, 1345-1353; doi: 10.1038/modpathol.2012.96; published online 8 June 2012
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. Also, as a case study, the methodology is illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed that there is no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
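The sample-selection point can be illustrated with a toy simulation: defaults are only observed among accepted applicants, and a naive logistic regression fitted on that selected sample understates risk for the full applicant population. This is a hypothetical sketch of the general phenomenon, not the Cramer (2004) estimator or the bank data used in the paper; all variables and coefficients are assumed.

```python
# Sketch of the sample-selection issue in credit scoring (toy data, not the paper's
# state-dependent sample selection estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
N = 100_000

score = rng.standard_normal(N)                       # observed credit score
risk = rng.standard_normal(N)                        # risk driver not in the model
p_default = 1 / (1 + np.exp(-(-2.0 - 1.0 * score + 1.0 * risk)))
default = rng.binomial(1, p_default)

# The bank accepts applicants with good scores and favourable soft information,
# so the accepted sample is selectively safer than the applicant population.
accepted = (score - 0.8 * risk + 0.3 * rng.standard_normal(N)) > 0

naive = LogisticRegression().fit(score[accepted, None], default[accepted])
pd_naive = naive.predict_proba(score[:, None])[:, 1]

print(f"true mean default rate (all applicants): {default.mean():.3f}")
print(f"naive model mean predicted PD:           {pd_naive.mean():.3f}")
```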
Abstract:
Dengue is considered one of the most important vector-borne infections, affecting almost half of the world population with 50 to 100 million cases every year. In this paper, we present one of the simplest models that can encapsulate all the important variables related to vector control of dengue fever. The model considers the human population, the adult mosquito population and the population of immature stages, which includes eggs, larvae and pupae. The model also considers the vertical transmission of dengue in the mosquitoes and the seasonal variation in the mosquito population. From this basic model describing the dynamics of dengue infection, we deduce thresholds for avoiding the introduction of the disease and for its elimination. In particular, we deduce a Basic Reproduction Number for dengue that includes parameters related to the immature stages of the mosquito. By neglecting seasonal variation, we calculate the equilibrium values of the model's variables. We also present a sensitivity analysis of the impact of four vector-control strategies on the Basic Reproduction Number, on the Force of Infection and on the human prevalence of dengue. Each of the strategies was studied separately from the others. The analysis presented allows us to conclude that, of the available vector-control strategies, adulticide application is the most effective, followed by the reduction of exposure to mosquito bites, the location and destruction of breeding places and, finally, larvicides. Current vector-control methods are concentrated on the mechanical destruction of mosquitoes' breeding places. Our results suggest that reducing the contact between vector and hosts (the biting rate) is as efficient as the logistically difficult but very effective control of adult mosquitoes.
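To make the threshold discussion concrete, the following toy host-vector model (SIR humans, SI mosquitoes) can be integrated numerically. It deliberately omits the immature stages, vertical transmission and seasonality of the full model, and uses assumed parameter values, so it only illustrates the qualitative role of the reproduction number.

```python
# Minimal host-vector dengue transmission sketch (SIR humans, SI mosquitoes).
# Illustrative only; all parameters are assumed and the immature stages are omitted.
import numpy as np
from scipy.integrate import odeint

# Parameters (per day, illustrative)
a     = 0.3      # biting rate
b_h   = 0.4      # probability of mosquito -> human transmission per bite
b_v   = 0.4      # probability of human -> mosquito transmission per bite
gamma = 1 / 7    # human recovery rate
mu_v  = 1 / 14   # adult mosquito mortality rate
m     = 2.0      # mosquitoes per human

def rhs(state, t):
    s_h, i_h, r_h, s_v, i_v = state           # fractions of each population
    new_h = a * b_h * m * i_v * s_h           # force of infection on humans
    new_v = a * b_v * i_h * s_v               # force of infection on mosquitoes
    return [
        -new_h,
        new_h - gamma * i_h,
        gamma * i_h,
        mu_v - new_v - mu_v * s_v,            # constant mosquito recruitment
        new_v - mu_v * i_v,
    ]

# A classical host-vector reproduction number (no square-root convention).
R0 = (a**2 * b_h * b_v * m) / (gamma * mu_v)
print(f"R0 ~ {R0:.2f}")

t = np.linspace(0, 365, 366)
sol = odeint(rhs, [0.999, 0.001, 0.0, 1.0, 0.0], t)
print(f"peak human prevalence: {sol[:, 1].max():.3f}")
```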
Parametric Sensitivity Analysis of the Most Recent Computational Models of Rabbit Cardiac Pacemaking
Abstract:
The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties, simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) than in the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effects on pacemaking activity.
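The population-based sensitivity workflow described above (randomize parameters, simulate, regress outputs on parameters) can be sketched generically. In the snippet below a cheap synthetic cycle-length function stands in for the Maltsev and Severi models, and all parameter names, scalings and exponents are assumptions chosen only to show the regression step.

```python
# Sketch of the population-based parameter sensitivity workflow: scale parameters
# randomly, compute a model output, then regress the log-output on log-parameters.
import numpy as np

rng = np.random.default_rng(7)
param_names = ["g_CaL", "g_Kr", "g_f", "k_NaK", "k_SERCA"]   # illustrative conductances
n_params, n_cells = len(param_names), 500

# Log-normal scaling factors around the baseline (spread chosen arbitrarily).
scales = np.exp(0.2 * rng.standard_normal((n_cells, n_params)))

def cycle_length(s):
    """Toy surrogate for a pacemaker cycle length (ms); NOT a real SAN model."""
    base = 350.0
    return (base * s[1]**0.3 * s[4]**-0.25 / (s[2]**0.4 * s[0]**0.1)
            * np.exp(0.02 * rng.standard_normal()))

cl = np.array([cycle_length(s) for s in scales])

# Linear regression of log-output on log-parameters gives one sensitivity
# coefficient per parameter (sign and magnitude of its influence on CL).
X = np.column_stack([np.ones(n_cells), np.log(scales)])
coef, *_ = np.linalg.lstsq(X, np.log(cl), rcond=None)
for name, c in zip(param_names, coef[1:]):
    print(f"{name:8s} sensitivity = {c:+.2f}")
```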
Abstract:
We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We have considered a block modeling (BM) approach to evaluate which fault system is actively accommodating the ongoing deformation in the two areas. For the Umbria-Marche Apennines, we find that the tectonic extension observed by GPS measurements is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We have also estimated the interseismic coupling distribution for the ATF using a 3D surface, and the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analyzed is the Gargano promontory, for which we have used the available InSAR and GPS velocities jointly. We first aligned the two datasets to the same terrestrial reference frame and then, using a simple dislocation approach, estimated the fault parameters that best reproduce the available data, obtaining a solution corresponding to the Mattinata fault. Subsequently we considered both GPS and InSAR datasets within a BM analysis in order to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North-Adriatic and South-Adriatic plates. We find that the deformation occurring in that region should be accommodated by more than one fault system, which is, however, difficult to detect given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we have also estimated the interseismic coupling distribution for the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions found using the BM approach have been tested by means of checkerboard resolution tests, which demonstrate that the coupling patterns depend on the positions of the geodetic data.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded into actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
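A minimal sketch of such a dynamic constraint is shown below: the achieved air-handling parameter follows the commanded one through a second-order lag, so an optimizer using this mapping cannot assume instantaneous actuation. The natural frequency, damping and command profile are illustrative assumptions, not values identified from transient data.

```python
# Sketch of a dynamic constraint model: the achieved air-handling parameter follows
# the commanded value through a second-order lag (natural frequency wn, damping zeta).
import numpy as np

def achieved_trajectory(u, dt, wn=2.0, zeta=0.7):
    """Integrate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u with explicit Euler steps."""
    y = np.zeros_like(u)
    y[0] = u[0]                                   # start at steady state
    ydot = 0.0
    for k in range(1, len(u)):
        yddot = wn**2 * (u[k - 1] - y[k - 1]) - 2 * zeta * wn * ydot
        ydot += dt * yddot
        y[k] = y[k - 1] + dt * ydot
    return y

dt = 0.01
t = np.arange(0, 5, dt)
commanded = np.where(t < 1.0, 1.2, 1.8)           # e.g. commanded boost pressure (bar)
achieved = achieved_trajectory(commanded, dt)

# The achieved value lags the command; an optimizer constrained by this model cannot
# demand instantaneous changes in the air-handling parameters.
k = 120                                           # index at t = 1.2 s, shortly after the step
print(f"command at t=1.2 s: {commanded[k]:.2f}, achieved: {achieved[k]:.2f}")
```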
Abstract:
Atmospheric aerosols affect both global and regional climate by altering the radiative balance of the atmosphere and acting as cloud condensation nuclei. Despite an increased focus on the research of atmospheric aerosols due to concerns about global climate change, current methods to observe the morphology of aerosols and to measure their hygroscopic properties are limited in various ways by experimental procedure. The primary objectives of this thesis were to use atomic force microscopy to determine the morphology of atmospherically relevant aerosols and to investigate the utility of environmental atomic force microscopy for imaging aerosols as they respond to changes in relative humidity. Traditional aerosol generation and collection techniques were used in conjunction with atomic force microscopy to image common organic and inorganic aerosols. In addition, environmental AFM was used to image aerosols at a variety of relative humidity values. The results of this research demonstrated the utility of atomic force microscopy for measuring the morphology of aerosols. In addition, the utility of environmental AFM for measuring the hygroscopic properties of aerosols was demonstrated. Further research in this area will lead to an increased understanding of the role of organic and inorganic aerosols in the atmosphere, allowing for the effects of anthropogenic aerosol emissions to be quantified and for more accurate climate models to be developed.
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
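For orientation, the snippet below shows the simplest point-estimate version of the ACE decomposition, Falconer's formulas applied to assumed twin correlations on the liability scale. The paper's likelihood-based structural equation fit to proband-ascertained binary data is considerably more involved; this is only the building block it extends.

```python
# Classic ACE decomposition from twin correlations (Falconer's formulas).
# Correlation values are illustrative assumptions, not estimates from the paper.
r_mz = 0.60   # monozygotic twin correlation on the liability scale (assumed)
r_dz = 0.40   # dizygotic twin correlation (assumed)

# Expected correlations under ACE: r_MZ = A + C, r_DZ = 0.5*A + C, A + C + E = 1.
A = 2 * (r_mz - r_dz)
C = r_mz - A
E = 1 - r_mz
print(f"A = {A:.2f}, C = {C:.2f}, E = {E:.2f}")
```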