849 results for Propagation prediction models
Abstract:
The medium-term hydropower scheduling (MTHS) problem involves determining, for each time stage of the planning period, the amount of generation at each hydro plant that maximizes the expected future benefits throughout the planning period while respecting plant operational constraints. It should be emphasized that this decision-making is based mainly on prior knowledge of the inflows. To forecast the inflows of a given basin, intelligent computational approaches can be used. This paper considers Dynamic Programming (DP) with the inflows fixed at their average values, which turns the problem into a deterministic one whose solution can be obtained by deterministic DP (DDP). The performance of the DDP technique on the MTHS problem was assessed by simulation using ensemble prediction models. Features and sensitivities of these models are discussed. © 2012 IEEE.
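A minimal sketch of the deterministic DP idea, assuming a single plant, a discretized storage grid, a simple mass balance and a placeholder concave benefit function; all names and numbers are illustrative, not the paper's formulation:

```python
import numpy as np

# Minimal deterministic DP (DDP) sketch for a single hydro plant.
# Inflows are fixed at their average values, which is what makes the
# problem deterministic. All numbers are illustrative.

T = 12                           # monthly stages in the planning period
inflow = np.full(T, 50.0)        # assumed average inflows (hm^3/month)
S = np.linspace(0, 1000, 101)    # discretized storage grid (hm^3)
s0 = 500.0                       # initial storage

def benefit(release):
    # Placeholder concave benefit of generation; a real model would map
    # release and head to energy and expected future benefit.
    return np.sqrt(np.maximum(release, 0.0))

V = np.zeros(len(S))             # terminal value function
policy = np.zeros((T, len(S)))   # optimal next-storage decision

for t in reversed(range(T)):
    V_new = np.full(len(S), -np.inf)
    for i, s in enumerate(S):
        # Mass balance: s_next = s + inflow - release,
        # so each candidate next storage implies a release.
        release = s + inflow[t] - S
        value = np.where(release >= 0.0, benefit(release) + V, -np.inf)
        j = int(np.argmax(value))
        V_new[i], policy[t, i] = value[j], S[j]
    V = V_new

print("optimal first-stage target storage:",
      policy[0, int(np.argmin(np.abs(S - s0)))])
```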
Abstract:
The design of mobile communication systems requires knowledge of the environment in which they will be deployed. To this end, accurate signal propagation prediction is sought through the use of prediction models. This work proposes an empirical model for estimating the received signal strength in indoor environments served by more than one transmitting source through a Distributed Antenna System (DAS). The method generalizes to the multiple-source (multi-source) case, with the single-source case as a particular instance. The modeling uses radials departing from each transmitter in order to capture the signal variability of indoor environments. The various signal perturbations that cause fast fading are characterized through statistical fading distributions: the most widely used ones, Rayleigh, Rice and Nakagami, are presented for channel characterization, and the recently developed kappa-mu and eta-mu distributions are computed as well. To validate the model, measurement campaigns were carried out in a shopping mall equipped with a DAS, with tests in two distinct environments, a supermarket and a food court. Simulations were performed in MATLAB® and the performance of the results was evaluated through the absolute error, the standard deviation and the RMS error.
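As an illustration of the multi-source generalization, the following hedged sketch combines per-transmitter log-distance path-loss predictions by summing the received powers in linear units; the exponent n and reference power P0 are assumptions, not the values fitted from the measurement campaigns:

```python
import numpy as np

# Hedged sketch: received signal strength at a point served by a
# distributed antenna system (DAS), combining the contribution of each
# transmitter with a log-distance path-loss model. The path-loss
# exponent n and reference power p0_dbm are illustrative assumptions.

def rssi_single(d, p0_dbm=-30.0, d0=1.0, n=3.0):
    """Log-distance model: power (dBm) at distance d (m)."""
    return p0_dbm - 10.0 * n * np.log10(np.maximum(d, d0) / d0)

def rssi_multisource(point, tx_positions, **kwargs):
    """Combine the powers from all sources in linear units (mW)."""
    d = np.linalg.norm(np.asarray(tx_positions) - point, axis=1)
    p_mw = 10.0 ** (rssi_single(d, **kwargs) / 10.0)
    return 10.0 * np.log10(p_mw.sum())   # total power back in dBm

tx = [(0.0, 0.0), (30.0, 0.0), (15.0, 25.0)]   # hypothetical DAS heads
print(rssi_multisource(np.array([10.0, 5.0]), tx))
```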
Abstract:
This work addresses some electromagnetic wave propagation models. First, models related to the prediction of the electromagnetic signal in indoor environments were analyzed. The models used in this work were Ray Tracing, the Dominant Path Model (DPM) and FDTD. For the first two models a commercial software package was used, while for the FDTD method an algorithm was developed in which the signal is analyzed in an environment with the same geometry used in the software. The results provided by the three models agree at the reception points analyzed, and the influence of propagation phenomena on signal strength is verified. The relevance of this work lies in the fact that, in the surveyed literature, no studies compare these three prediction models; topics for future research are also proposed.
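For the FDTD part, a minimal one-dimensional Yee update loop conveys the method; the work's algorithm operates over the real building geometry, so this free-space sketch with a soft Gaussian source is purely illustrative:

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) sketch in free space with a soft
# Gaussian source; it only illustrates the leapfrog update at the core
# of the method, not the 2D/3D indoor implementation.

nx, nt = 400, 800
ez = np.zeros(nx)          # electric field on integer grid points
hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
courant = 0.5              # normalized Courant number S = c*dt/dx

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])               # H update from curl E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])         # E update from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```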
Abstract:
Background: Lynch syndrome (LS) is the most common form of inherited predisposition to colorectal cancer (CRC), accounting for 2-5% of all CRC. LS is an autosomal dominant disease characterized by mutations in the mismatch repair genes mutL homolog 1 (MLH1), mutS homolog 2 (MSH2), postmeiotic segregation increased 1 (PMS1), postmeiotic segregation increased 2 (PMS2) and mutS homolog 6 (MSH6). Mutation risk prediction models can be incorporated into clinical practice, facilitating the decision-making process and identifying individuals for molecular investigation. This is extremely important in countries with limited economic resources. This study aims to evaluate the sensitivity and specificity of five predictive models for germline mutations in repair genes in a sample of individuals with suspected Lynch syndrome. Methods: Blood samples from 88 patients were analyzed by sequencing the MLH1, MSH2 and MSH6 genes. The probability of detecting a mutation was calculated using the PREMM, Barnetson, MMRpro, Wijnen and Myriad models. To evaluate the sensitivity and specificity of the models, receiver operating characteristic curves were constructed. Results: Among the 88 patients included in this analysis, 31 mutations were identified: 16 were found in the MSH2 gene, 15 in the MLH1 gene, and no pathogenic mutations were identified in the MSH6 gene. The AUCs for the PREMM (0.846), Barnetson (0.850), MMRpro (0.821) and Wijnen (0.807) models did not differ significantly. The Myriad model presented a lower AUC (0.704) than the four other models evaluated. Considering thresholds of >= 5%, the models' sensitivity ranged from 1 (Myriad) to 0.87 (Wijnen) and their specificity ranged from 0 (Myriad) to 0.38 (Barnetson). Conclusions: The Barnetson, PREMM, MMRpro and Wijnen models present similar AUCs. The AUC of the Myriad model is statistically inferior to that of the four other models.
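A hedged sketch of the evaluation step with scikit-learn: each model yields a per-patient carrier probability, which is compared against the sequencing result via ROC analysis; the scores below are random placeholders, not study data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch: each risk model outputs a mutation-carrier probability per
# patient, compared against the sequencing result via AUC, plus
# sensitivity/specificity at a >= 5% referral threshold. The scores
# below are random placeholders, not the study's data.

rng = np.random.default_rng(0)
carrier = rng.integers(0, 2, size=88)   # 1 = pathogenic mutation found
models = {name: rng.random(88) for name in
          ("PREMM", "Barnetson", "MMRpro", "Wijnen", "Myriad")}

for name, scores in models.items():
    auc = roc_auc_score(carrier, scores)
    pred = scores >= 0.05               # >= 5% threshold
    sens = (pred & (carrier == 1)).sum() / (carrier == 1).sum()
    spec = (~pred & (carrier == 0)).sum() / (carrier == 0).sum()
    print(f"{name}: AUC={auc:.3f} sens={sens:.2f} spec={spec:.2f}")
```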
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, the tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
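A heavily simplified sketch of the AUS-BDAS idea on the Lorenz 1963 model: a breeding cycle maintains one bred vector, and each analysis increment is confined to that direction via a ridge-regularized fit; the observation network, noise levels and gain are illustrative assumptions, not the published scheme:

```python
import numpy as np

# Hedged sketch of AUS on Lorenz 1963: the analysis increment is
# confined to a single bred vector estimated by breeding on the data
# assimilation cycle (BDAS). Everything here is a simplification.

def step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz 1963 (adequate for a sketch)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

rng = np.random.default_rng(1)
truth = np.array([1.0, 1.0, 1.0])
analysis = truth + rng.normal(0.0, 1.0, 3)
bred = rng.normal(0.0, 1.0, 3)
amp, obs_var = 1e-3, 0.01                  # breeding amplitude, obs error

for k in range(2000):
    truth = step(truth)
    forecast = step(analysis)
    perturbed = step(analysis + amp * bred / np.linalg.norm(bred))
    bred = (perturbed - forecast) / amp    # breeding on the DA cycle
    if k % 25 == 0:                        # observe x every 25 steps
        obs = truth[0] + rng.normal(0.0, obs_var ** 0.5)
        e = bred / np.linalg.norm(bred)    # unit unstable direction
        # ridge-regularized fit of the innovation along e (assumed form)
        gamma = e[0] * (obs - forecast[0]) / (e[0] ** 2 + obs_var)
        forecast = forecast + gamma * e    # increment confined to e
    analysis = forecast

print("final analysis error:", np.linalg.norm(analysis - truth))
```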
Abstract:
High spectral resolution radiative transfer (RT) codes are essential tools in the study of radiative energy transfer in the Earth's atmosphere and a support for the development of parameterizations for the fast RT codes used in climate and weather prediction models. Cirrus clouds permanently cover 30% of the Earth's surface, representing an important contribution to the Earth-atmosphere radiation balance. This work focused on the development of the RT model LBLMS. The model, widely tested in the infrared spectral range, has been extended to the shortwave spectrum and compared with airborne and satellite measurements to study the optical properties of cirrus clouds. A new database of single-scattering properties has been developed for mid-latitude cirrus clouds. Ice clouds are treated as a mixture of ice crystals with various habits, and the optical properties of the mixture are tested against radiometric measurements in selected case studies. Finally, a parameterization of the mixture for application to weather prediction and global circulation models has been developed: the bulk optical properties of the ice crystals are parameterized as functions of the effective dimension of measured particle size distributions representative of mid-latitude cirrus clouds. Tests with the limited-area weather prediction model COSMO have shown the impact of the new parameterization relative to cirrus cloud optical properties based on ice spheres.
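A hedged sketch of the parameterization step: bulk optical properties are fitted as smooth functions of the effective dimension of the particle size distribution, here with a synthetic placeholder "database" and a quadratic fit in 1/D_eff; the functional form and numbers are assumptions:

```python
import numpy as np

# Sketch: bulk optical properties from a single-scattering database are
# fitted as smooth functions of the effective dimension D_eff of the
# particle size distribution, for use in a fast RT/NWP code. The
# "database" values below are placeholders.

d_eff = np.linspace(20.0, 120.0, 21)        # effective dimension (um)
ext_coeff = 3.0 / d_eff + 0.002             # fake bulk extinction data

coeffs = np.polyfit(1.0 / d_eff, ext_coeff, deg=2)   # fit in 1/D_eff

def ext_param(d):
    """Parameterized bulk extinction usable inside a fast RT code."""
    return np.polyval(coeffs, 1.0 / d)

print(ext_param(np.array([30.0, 60.0, 90.0])))
```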
Abstract:
Riverbank instability can result in considerable human and land losses. The Po River, the most important in Italy, is characterized by main embankments of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) prediction models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this end, a number of threshold-logic-unit networks are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. To obtain a comprehensive geotechnical database, several cone penetration tests from the study site were interpreted. The proposed models are developed upon stability analyses with a finite element code over different representative sections of the river embankments. For validation, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and they notably outperform the derived multiple linear regression models.
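A minimal sketch of such an ANN surrogate using scikit-learn, with synthetic stand-ins for the geometrical and geotechnical inputs and for the FEM-derived FS values; the architecture and features are assumptions, not the study's calibrated networks:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hedged sketch: a multilayer perceptron mapping geometrical and
# geotechnical inputs (e.g., bank height, slope angle, river and
# groundwater levels, strength parameters) to the factor of safety.
# Features and targets below are synthetic placeholders for the
# FS values the study obtained from finite element analyses.

rng = np.random.default_rng(0)
X = rng.random((200, 5))                     # 5 influencing parameters
fs = (1.0 + X @ np.array([0.5, -0.3, 0.2, -0.4, 0.1])
      + 0.05 * rng.normal(size=200))         # stand-in for FEM results

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=0))
model.fit(X, fs)
print("predicted FS for first sections:", model.predict(X[:3]))
```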
Abstract:
BACKGROUND: To develop risk-adapted prevention of psychosis, an accurate estimation of the individual risk of psychosis at a given time is needed. Inclusion of biological parameters in multilevel prediction models is thought to improve the predictive accuracy of models based on clinical variables. To this aim, mismatch negativity (MMN) was investigated in a sample clinically at high risk, comparing individuals with and without subsequent conversion to psychosis. METHODS: At baseline, an auditory oddball paradigm was used in 62 subjects meeting criteria of a late at-risk state who remained antipsychotic-naive throughout the study. The median follow-up period was 32 months (minimum of 24 months in nonconverters, n = 37). Repeated-measures analysis of covariance was employed to analyze the MMN recorded at frontocentral electrodes; additional comparisons with healthy controls (HC, n = 67) and first-episode schizophrenia patients (FES, n = 33) were performed. Predictive value was evaluated by a Cox regression model. RESULTS: Compared with nonconverters, the duration MMN in converters (n = 25) showed significantly reduced amplitudes across the six frontocentral electrodes; the same applied in comparison with HC, but not FES, whereas the duration MMN in nonconverters was comparable to HC and larger than in FES. A prognostic score was calculated based on a Cox regression model and stratified into two risk classes, which showed significantly different survival curves. CONCLUSIONS: Our findings demonstrate that the duration MMN is significantly reduced in at-risk subjects converting to first-episode psychosis compared with nonconverters, and may contribute not only to the prediction of conversion but also to a more individualized risk estimation and thus risk-adapted prevention.
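A hedged sketch of the prognostic step with the lifelines library: a Cox model on time to conversion with MMN amplitude as a covariate, and a prognostic score stratified into two risk classes; all data below are random placeholders:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Sketch: Cox regression of time to conversion with MMN amplitude as
# predictor, then a prognostic score split into two risk classes.
# The data frame below is a random placeholder, not study data.

rng = np.random.default_rng(0)
n = 62
df = pd.DataFrame({
    "months_to_event": rng.exponential(30.0, n).round(1),
    "converted": rng.integers(0, 2, n),         # 1 = conversion
    "mmn_amplitude": rng.normal(-2.0, 1.0, n),  # duration MMN (uV)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="converted")

score = cph.predict_partial_hazard(df)          # prognostic score
risk_class = (score > score.median()).map({True: "high", False: "low"})
print(risk_class.value_counts())
```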
Abstract:
BACKGROUND: Fever in severe chemotherapy-induced neutropenia (FN) is the most frequent manifestation of a potentially lethal complication of current intensive chemotherapy regimens. This study aimed to establish models predicting the risk of FN, and of FN with bacteremia, in pediatric cancer patients. METHODS: In a single-centre cohort study, characteristics potentially associated with FN and episodes of FN were retrospectively extracted from charts. Poisson regression accounting for chemotherapy exposure time was used for analysis. Prediction models were constructed on a derivation set comprising two thirds of the observations and validated on the remaining third. RESULTS: In 360 pediatric cancer patients diagnosed and treated over a cumulative chemotherapy exposure time of 424 years, 629 FN were recorded (1.48 FN per patient per year; 95% confidence interval (CI), 1.37-1.61), 145 of them with bacteremia (23% of FN; 0.34; 0.29-0.40). More intensive chemotherapy, shorter time since diagnosis, bone marrow involvement, a central venous access device (CVAD), and prior FN were significantly and independently associated with a higher risk of developing both FN and FN with bacteremia. The prediction models explained more than 30% of the respective risks. CONCLUSIONS: The two models predicting FN and FN with bacteremia were based on five easily accessible clinical variables. Before clinical application, they need to be validated in prospective studies.
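A minimal sketch of the modelling step with statsmodels: Poisson regression of FN counts with log chemotherapy-exposure time as an offset, so that rates per patient-year are modelled; the predictors and data are placeholders for the five clinical variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch: Poisson regression of FN counts with exposure time as an
# offset, so rates (FN per patient-year) are modelled. Predictors and
# data are placeholders for the study's five clinical variables.

rng = np.random.default_rng(0)
n = 360
df = pd.DataFrame({
    "intensive_chemo": rng.integers(0, 2, n),
    "cvad": rng.integers(0, 2, n),
    "prior_fn": rng.integers(0, 2, n),
})
exposure_years = rng.uniform(0.2, 3.0, n)     # time at risk per patient
rate = np.exp(0.2 + 0.5 * df["intensive_chemo"] + 0.3 * df["prior_fn"])
fn_count = rng.poisson(rate * exposure_years)

X = sm.add_constant(df)
model = sm.GLM(fn_count, X, family=sm.families.Poisson(),
               offset=np.log(exposure_years)).fit()
print(model.params)
```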
Abstract:
Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when it is used to evaluate the added discrimination of a new biomarker in the model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings, which do not involve time-to-event outcomes. However, many disease outcomes are time-dependent, and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study. Key words: Prognostic model, Discrimination, Time-dependent NRI and IDI
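For reference, a sketch of Pencina et al.'s (2008) binary-outcome NRI and IDI, the quantities the dissertation extends to censored time-to-event data; the risks and outcomes below are simulated placeholders:

```python
import numpy as np

# Binary-outcome NRI and IDI per Pencina et al. (2008); p_old/p_new are
# predicted risks from models without/with the new biomarker. This is
# the starting point, not the dissertation's time-dependent extension.

def nri(y, p_old, p_new, threshold=0.5):
    up = (p_new > threshold) & (p_old <= threshold)     # reclassified up
    down = (p_new <= threshold) & (p_old > threshold)   # reclassified down
    events, nonevents = y == 1, y == 0
    return ((up[events].mean() - down[events].mean())
            + (down[nonevents].mean() - up[nonevents].mean()))

def idi(y, p_old, p_new):
    events, nonevents = y == 1, y == 0
    return ((p_new[events].mean() - p_old[events].mean())
            - (p_new[nonevents].mean() - p_old[nonevents].mean()))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
p_old = np.clip(0.3 + 0.3 * y + rng.normal(0, 0.2, 500), 0, 1)
p_new = np.clip(0.2 + 0.5 * y + rng.normal(0, 0.2, 500), 0, 1)
print("NRI:", nri(y, p_old, p_new), "IDI:", idi(y, p_old, p_new))
```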
Abstract:
BACKGROUND: Zebrafish is a clinically relevant model of heart regeneration. Unlike mammals, it has a remarkable heart repair capacity after injury, and promises novel translational applications. Amputation and cryoinjury models are key research tools for understanding injury response and regeneration in vivo. An understanding of the transcriptional responses following injury is needed to identify key players of heart tissue repair, as well as potential targets for boosting this property in humans. RESULTS: We investigated amputation and cryoinjury in vivo models of heart damage in the zebrafish through unbiased, integrative analyses of independent molecular datasets. To detect genes with potential biological roles, we derived computational prediction models with microarray data from heart amputation experiments. We focused on a top-ranked set of genes highly activated in the early post-injury stage, whose activity was further verified in independent microarray datasets. Next, we performed independent validations of expression responses with qPCR in a cryoinjury model. Across the in vivo models, the top candidates showed highly concordant responses at 1 and 3 days post-injury, which highlights the predictive power of our analysis strategies and the possible biological relevance of these genes. The top candidates are significantly involved in cell fate specification and differentiation, and include heart failure markers such as periostin, as well as potential new targets for heart regeneration. For example, ptgis and ca2 were overexpressed, while usp2a, a regulator of the p53 pathway, was down-regulated in our in vivo models. Interestingly, high activity of ptgis and ca2 has previously been observed in failing hearts from rats and humans. CONCLUSIONS: We identified genes with potentially critical roles in the response to cardiac damage in the zebrafish. Their transcriptional activities are reproducible in different in vivo models of cardiac injury.
Abstract:
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grids and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking the technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved to be very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
Abstract:
The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined incidence of the marine and meteorological processes that occur along the shoreline. This phenomenon is revealed violently and sporadically as gravitational movements of the ground, and can cause material and human losses. Although knowledge of these erosion risks is vital for the proper management of the coast, the development of cliff-erosion predictive models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and present important drawbacks: (a) extrapolation models extend historical records into the future; (b) empirical models study, over historical records, the system response to the change of one parameter; (c) stochastic models determine the magnitude and frequency of future events by extrapolating probability distributions extracted from historical catalogues; (d) process-response models have unexplored stability and error propagation; (e) models based on partial differential equations are computationally expensive and not very accurate. The first part of this thesis describes the main features of the latest models of each type and, for the most commonly used, gives their ranges of application, advantages and disadvantages. Finally, as a synthesis of the most relevant processes contemplated by the reviewed models, a conceptual diagram of coastal recession is presented; it collects the most influential processes that must be taken into account when using or creating a coastal recession model aimed at evaluating the hazard (time/frequency) of the phenomenon in the short to medium term. A new process-response coastal recession model developed in this thesis incorporates the geomechanical behavior of materials whose compressive strength does not exceed 5 MPa. The model simulates the spatial and temporal evolution of a 2D cliff profile that can consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, changes in mean lake level, tides and waves) are coupled with the evolution of the terrain: erosion, cliff face failure and the formation of a protective colluvial (talus) wedge. In its different variants the model can include the analysis of the geomechanical stability of the materials, the effect of debris present at the cliff foot, groundwater effects, the beach, run-up, changes in mean sea level, and seasonal or inter-annual changes in the mean level of the water body (lakes). The discretization error of the model and its propagation in time have been studied from the exact solutions for the first two tidal periods, for different numerical approximations in both time and space; the results justify the choices that minimize the error and identify the most suitable approximation methods for subsequent use in the modeling. The model has been validated against field data on the Holderness coast, Yorkshire, UK, and on the north coast of Lake Erie, Ontario, Canada.
The results represent an important step forward in coastal recession modeling, especially in linking material properties to the processes of cliff recession, in considering the influence of groundwater and the over-steepening of rocky profiles, and in capturing the response to changing conditions caused by climate change (e.g., mean sea level, changes in lake levels).
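A highly simplified sketch of the process-response idea: a 2D profile retreats where the tidal water level brings wave attack against weak material, with a retreat rate inversely proportional to compressive strength; the erosion law, constants and tide are illustrative, and the full model also couples failures, talus protection, groundwater, beach and run-up:

```python
import numpy as np

# Toy process-response recession sketch: nodes of a 2D cliff profile
# retreat when the semidiurnal tidal level brings wave attack against
# them; the erosion law and all constants are illustrative assumptions.

z = np.linspace(0.0, 10.0, 101)       # elevation (m) of profile nodes
x = np.full_like(z, 50.0)             # horizontal cliff-face position (m)
strength = np.full_like(z, 2.0)       # compressive strength (MPa), < 5 MPa

dt_hours = 1.0
for step in range(24 * 30):           # one month of hourly steps
    tide = 2.0 * np.sin(2 * np.pi * step / 12.42) + 3.0   # tidal level (m)
    wet = np.abs(z - tide) < 0.5      # band attacked by waves this hour
    # retreat rate inversely proportional to material strength
    x[wet] -= 1e-4 * dt_hours / strength[wet]

print("maximum retreat (m):", 50.0 - x.min())
```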
Abstract:
Most empirical disciplines promote the reuse and sharing of datasets, as it leads to greater possibility of replication. While this is increasingly the case in Empirical Software Engineering, some of the most popular bug-fix datasets are now known to be biased. This raises two significant concerns: first, that sample bias may lead to underperforming prediction models, and second, that the external validity of studies based on biased datasets may be suspect. This issue has raised considerable consternation in the ESE literature in recent years. However, there is a confounding factor in these datasets that has not been examined carefully: size. Biased datasets sample only some of the data that could be sampled, and do so in a biased fashion; but biased samples could be smaller, or larger. Smaller datasets in general provide less reliable bases for estimating models, and thus could lead to inferior model performance. In this setting, we ask the question: what affects performance more, bias or size? We conduct a detailed, large-scale meta-analysis, using simulated datasets sampled with bias from a high-quality dataset which is relatively free of bias. Our results suggest that size always matters just as much as bias direction, and in fact much more than bias direction when considering information-retrieval measures such as AUC and F-score. This indicates that, at least for prediction models, even when dealing with sampling bias, simply finding larger samples can sometimes be sufficient. Our analysis also exposes the complexity of the bias issue, and raises further issues to be explored in the future.
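A hedged sketch of the meta-analysis design: from a relatively clean dataset, draw training samples of varying size, with and without a bias that under-samples defective entities, and compare the resulting AUC on held-out data; the data generation and bias mechanism are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of the simulation design: sample training sets of varying size
# from a "clean" pool, with and without bias (here, buggy examples are
# under-sampled), then compare held-out AUC of a defect predictor.

rng = np.random.default_rng(0)
X_all = rng.normal(size=(5000, 10))
y_all = (X_all[:, 0] + X_all[:, 1] + rng.normal(0, 1, 5000) > 0).astype(int)
X_test, y_test = X_all[4000:], y_all[4000:]

def draw(size, biased):
    # biased sampling keeps buggy examples with reduced probability
    w = np.where(y_all[:4000] == 1, 0.3, 1.0) if biased else np.ones(4000)
    idx = rng.choice(4000, size=size, replace=False, p=w / w.sum())
    return X_all[idx], y_all[idx]

for size in (100, 500, 2000):
    for biased in (False, True):
        Xs, ys = draw(size, biased)
        clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
        auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
        print(f"n={size:5d} biased={biased}: AUC={auc:.3f}")
```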
Abstract:
Maximizing energy autonomy is a persistent challenge when deploying mobile robots in ionizing radiation or other hazardous environments. Having a reliable robot system is essential for the successful execution of missions and to avoid manual recovery of the robots in environments that are harmful to human beings. For the deployment of robot missions at short notice, the ability to know beforehand the energy required for performing the task is essential. This paper presents an online method for predicting energy requirements based on pre-determined power models for a mobile robot. A small mobile robot, the Khepera III, is used for the experimental study, and the results are promising, with high prediction accuracy. Applications of the energy prediction models in energy optimization and simulations are also discussed, along with examples of significant energy savings.
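A minimal sketch of the prediction idea: with pre-determined power models per activity, mission energy is the sum of power times duration over the planned schedule; the power values are illustrative assumptions, not measured Khepera III figures:

```python
# Sketch: mission energy predicted from pre-determined power models for
# each robot activity. The wattages below are illustrative assumptions,
# not measured Khepera III data.

POWER_W = {"drive": 2.5, "turn": 1.8, "sense": 0.6, "idle": 0.3}

def predict_energy(plan):
    """plan: list of (activity, duration_s); returns energy in joules."""
    return sum(POWER_W[activity] * duration for activity, duration in plan)

mission = [("drive", 120.0), ("turn", 15.0), ("sense", 60.0), ("idle", 30.0)]
energy_j = predict_energy(mission)
print(f"predicted mission energy: {energy_j:.1f} J "
      f"({energy_j / 3600.0:.4f} Wh)")
```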