960 results for application to medical science


Relevance: 100.00%

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and a large amount of missing data. In this case, a common way of handling the missingness is to discard from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied afterwards. Moreover, methodologies for data imputation may depend on the particular purpose of the analysis and may achieve better results by exploiting specific characteristics of the domain. We study the problem of missing-data treatment in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables to yield results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation-maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the resulting imputation improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
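To illustrate the imputation idea, here is a minimal sketch, assuming a trivial two-node network (a fully observed binary parent and a binary child with missing values); the real method also learns the network structure via structural EM with an exact anytime search before completing the dataset for the survival tree.

```python
# Minimal sketch: EM on incomplete data for a two-node Bayesian network,
# then imputation of each missing value by its posterior mode. The
# variables and the network itself are illustrative assumptions.
import numpy as np

def em_impute(z, x, n_iter=50):
    """z: fully observed binary parent (0/1); x: binary child stored as
    floats, with np.nan marking missing entries."""
    theta = np.array([0.5, 0.5])          # P(x=1 | z=0), P(x=1 | z=1)
    miss = np.isnan(x)
    zi = z.astype(int)
    for _ in range(n_iter):
        # E-step: replace each missing x by its expected value given z
        w = np.where(miss, theta[zi], x)
        # M-step: re-estimate the conditional probability table
        for v in (0, 1):
            theta[v] = w[zi == v].mean()
    x_imp = x.copy()
    x_imp[miss] = (theta[zi[miss]] > 0.5).astype(float)  # posterior mode
    return x_imp, theta
```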

Relevance: 100.00%

Abstract:

There are many uncertainties in forecasting the charging and discharging capacity required by electric vehicles (EVs), often as a consequence of stochastic usage and intermittent travel. With a view to large-scale EV integration in future power networks, this paper develops a capacity forecasting model that considers eight particular uncertainties in three categories. Using the model, a typical application of EVs to load levelling is presented and exemplified with a UK 2020 case study. The results presented in this paper demonstrate that the proposed model is accurate for charge and discharge prediction and provides a feasible basis for the steady-state analysis required for large-scale EV integration.
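As a rough illustration of this kind of forecasting, the following Monte Carlo sketch aggregates stochastic charging demand; the three uncertainty inputs (plug-in hour, daily distance, battery capacity) and all parameter values are illustrative assumptions, not the paper's eight modelled uncertainties.

```python
# Monte Carlo sketch of aggregate EV charging demand under stochastic
# usage. Distributions and constants are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N_EV, KWH_PER_KM, CHARGER_KW = 100_000, 0.18, 3.0

arrival = rng.normal(18.0, 1.5, N_EV) % 24                 # plug-in hour
distance = rng.lognormal(mean=3.3, sigma=0.5, size=N_EV)   # km per day
battery = rng.choice([24.0, 40.0, 60.0], size=N_EV)        # kWh

energy = np.minimum(distance * KWH_PER_KM, battery)  # energy to restore
hours = energy / CHARGER_KW                          # charging duration

# Aggregate demand profile: power drawn in each half-hour slot
# (sessions crossing midnight are ignored in this simple sketch).
grid = np.arange(0.0, 24.0, 0.5)
load_mw = np.array([
    CHARGER_KW * np.sum((arrival <= t) & (t < arrival + hours))
    for t in grid
]) / 1e3
print(f"peak {load_mw.max():.1f} MW at hour {grid[load_mw.argmax()]:.1f}")
```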

Relevance: 100.00%

Abstract:

We investigate the transport of phonons between N harmonic oscillators in contact with independent thermal baths and coupled to a common oscillator, and derive an expression for the steady-state heat flow between the oscillators in the weak coupling limit. We apply these results to an optomechanical array consisting of a pair of mechanical resonators coupled to a single quantized electromagnetic field mode by radiation pressure, as well as to thermal baths with different temperatures. In the weak coupling limit this system is shown to be equivalent to two mutually coupled harmonic oscillators in contact with an effective common thermal bath in addition to their independent baths. The steady-state occupation numbers and heat flows are derived and discussed in various regimes of interest.
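For orientation, steady-state heat currents of this kind are often written in a Landauer-like form; the expression below is a generic weak-coupling schematic, not necessarily the exact result derived in the paper:

$$ J = \int_0^\infty \frac{d\omega}{2\pi}\,\hbar\omega\,\tau(\omega)\,\bigl[n_h(\omega) - n_c(\omega)\bigr], \qquad n_i(\omega) = \frac{1}{e^{\hbar\omega/k_B T_i} - 1}, $$

where $\tau(\omega)$ is an effective transmission between the hot (h) and cold (c) baths and $n_i$ are Bose-Einstein occupations.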

Relevance: 100.00%

Abstract:

We investigate line formation processes in Type IIb supernovae (SNe) from 100 to 500 days post-explosion using spectral synthesis calculations. The modelling identifies the nuclear burning layers and physical mechanisms that produce the major emission lines, and the diagnostic potential of these. We compare the model calculations with data on the three best-observed Type IIb SNe to date: SN 1993J, SN 2008ax, and SN 2011dh. Oxygen nucleosynthesis depends sensitively on the main-sequence mass of the star, and modelling of the [O I] λλ6300, 6364 lines constrains the progenitors of these three SNe to the M_ZAMS = 12-16 M_sun range (ejected oxygen masses 0.3-0.9 M_sun), with SN 2011dh towards the lower end and SN 1993J towards the upper end of the range. The high ejecta masses from M_ZAMS ≳ 17 M_sun progenitors give rise to brighter nebular-phase emission lines than observed. Nucleosynthesis analysis thus supports a scenario of low-to-moderate mass progenitors for Type IIb SNe, and by implication an origin in binary systems. We demonstrate how oxygen and magnesium recombination lines may be combined to diagnose the magnesium mass in the SN ejecta. For SN 2011dh, a magnesium mass of 0.02-0.14 M_sun is derived, which gives a Mg/O production ratio consistent with the solar value. Nitrogen left in the He envelope from CNO burning gives strong [N II] λλ6548, 6583 emission lines that dominate over Hα emission in our models. The hydrogen envelopes of Type IIb SNe are too small and dilute to produce any noticeable Hα emission or absorption after ~150 days, and nebular-phase emission seen around 6550 Å is in many cases likely caused by [N II] λλ6548, 6583. Finally, the influence of radiative transport on the emergent line profiles is investigated. Significant line blocking in the metal core remains for several hundred days, which affects the emergent spectrum. These radiative transfer effects lead to early-time blueshifts of the emission line peaks, which gradually disappear as the optical depths decrease with time. The modelled evolution of this effect matches the observed evolution in SN 2011dh.
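A standard diagnostic consistent with the doublet analysis above (textbook Sobolev-limit behaviour, not the paper's full spectral synthesis): assuming τ(6364) ≈ τ(6300)/3, the [O I] flux ratio runs from 3 in the optically thin limit to 1 when optically thick, so the measured ratio constrains the line optical depth.

```python
# Sketch: [O I] 6300/6364 flux ratio vs Sobolev optical depth at 6300 A,
# under the common assumption tau(6364) = tau(6300) / 3.
import numpy as np

def oi_doublet_ratio(tau_6300):
    """F(6300)/F(6364) as a function of the 6300 A optical depth."""
    tau = np.asarray(tau_6300, dtype=float)
    return (1.0 - np.exp(-tau)) / (1.0 - np.exp(-tau / 3.0))

print(oi_doublet_ratio([0.01, 1.0, 10.0]))  # ~3 (thin) down to ~1 (thick)
```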

Relevance: 100.00%

Abstract:

In a recent paper (Automatica 49 (2013) 2860-2866), the Wirtinger-based inequality was introduced to derive tractable stability conditions for time-delay and sampled-data systems. We point out two errors in Theorem 8, which concerns the stability analysis of sampled-data systems, and present the corrected theorem.
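For reference, the Wirtinger-based integral inequality at the heart of the cited paper is usually stated as follows (quoted here for orientation; the original paper remains the authoritative statement): for a differentiable $x:[a,b]\to\mathbb{R}^n$ and a matrix $R \succ 0$,

$$ \int_a^b \dot{x}(u)^\top R\,\dot{x}(u)\,du \;\ge\; \frac{1}{b-a}\bigl(x(b)-x(a)\bigr)^\top R\bigl(x(b)-x(a)\bigr) + \frac{3}{b-a}\,\Omega^\top R\,\Omega, $$

$$ \Omega = x(b) + x(a) - \frac{2}{b-a}\int_a^b x(u)\,du. $$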

Relevance: 100.00%

Abstract:

Introduction: HIV testing is a cornerstone of efforts to combat the HIV epidemic, and testing conducted as part of surveillance provides invaluable data on the spread of infection and the effectiveness of campaigns to reduce the transmission of HIV. However, participation in HIV testing can be low, and if respondents systematically select not to be tested because they know or suspect they are HIV positive (and fear disclosure), standard approaches to dealing with missing data will fail to remove selection bias. We implemented Heckman-type selection models, which can be used to adjust for missing data that are not missing at random, and established the extent of selection bias in a population-based HIV survey in an HIV hyperendemic community in rural South Africa.

Methods: We used data from a population-based HIV survey carried out in 2009 in rural KwaZulu-Natal, South Africa. In this survey, 5565 women (35%) and 2567 men (27%) provided blood for an HIV test. We accounted for missing data using interviewer identity as a selection variable that predicted consent to HIV testing but was unlikely to be independently associated with HIV status. Our approach used this selection variable to examine the HIV status of residents who would ordinarily have refused to test but were allocated a persuasive interviewer. Our copula model allows for flexibility when modelling the dependence structure between HIV survey participation and HIV status.

Results: For women, our selection model generated an HIV prevalence estimate of 33% (95% CI 27–40) for all people eligible to consent to HIV testing in the survey. This estimate is higher than the estimate of 24% generated when only information from respondents who participated in testing is used in the analysis, and the estimate of 27% when imputation analysis is used to predict missing data on HIV status. For men, we found an HIV prevalence of 25% (95% CI 15–35) using the selection model, compared to 16% among those who participated in testing, and 18% estimated with imputation. We provide new confidence intervals that correct for the fact that the relationship between testing and HIV status is unknown and requires estimation.

Conclusions: We confirm the feasibility and value of adopting selection models to account for missing data in population-based HIV surveys and surveillance systems. Elements of survey design, such as interviewer identity, present the opportunity to adopt this approach in routine applications. Where non-participation is high, true confidence intervals are much wider than standard approaches to dealing with missing data suggest.
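A minimal sketch of a Heckman-type selection model of the kind described above, assuming the Gaussian (bivariate-probit) special case; the paper's copula model is more flexible about the dependence structure, and all variable names here (Z holding interviewer dummies, X covariates, tested, hiv) are illustrative.

```python
# Bivariate-probit selection model: selection s* = Z@a + u, outcome
# y* = X@b + v, with (u, v) ~ BVN(0, rho); y observed only when s = 1.
# A hypothetical sketch, not the paper's exact copula specification.
import numpy as np
from scipy.stats import multivariate_normal, norm

def neg_loglik(params, Z, X, tested, hiv):
    ka, kb = Z.shape[1], X.shape[1]
    a, b = params[:ka], params[ka:ka + kb]
    rho = np.tanh(params[-1])                 # keep correlation in (-1, 1)
    za, xb = Z @ a, X @ b
    ll = np.empty(len(tested))
    for i in np.where(tested)[0]:
        s = 1.0 if hiv[i] else -1.0           # sign flip handles y = 0
        cov = [[1.0, s * rho], [s * rho, 1.0]]
        ll[i] = np.log(multivariate_normal.cdf(
            [za[i], s * xb[i]], mean=[0.0, 0.0], cov=cov))
    nt = ~tested
    ll[nt] = norm.logcdf(-za[nt])             # untested: selection only
    return -ll.sum()

# Fit with scipy.optimize.minimize; the selection-adjusted prevalence is
# then norm.cdf(X @ b_hat).mean(), evaluated over everyone eligible.
```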

Relevance: 100.00%

Abstract:

Explore the free OPAL resources, including surveys on air quality, water quality, tree health, hedgerow biodiversity, soil and earthworms, and a bug-count survey.

Relevance: 100.00%

Abstract:

The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate these high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. It takes advantage of these memory-access statistics by replacing all static standard-cell based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in the silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90 nm CMOS process, and silicon measurements show full functionality and an information-bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).
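The refresh-free design rule implied here reduces to a simple timing check: every D-SCM cell must be overwritten before its retention time expires. A back-of-the-envelope sketch, with all numbers as illustrative assumptions rather than figures from the paper:

```python
# Refresh-free D-SCM sanity check: data must be rewritten before the
# worst-case retention time elapses. All values are assumed, not measured.
retention_us = 5.0            # assumed worst-case cell retention time
f_clk_mhz = 400.0             # assumed decoder clock frequency
cycles_between_writes = 50    # assumed worst-case cycles between updates

update_interval_us = cycles_between_writes / f_clk_mhz
assert update_interval_us < retention_us, "memory would need refreshing"
print(f"update every {update_interval_us:.3f} us < {retention_us} us: OK")
```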

Relevance: 100.00%

Abstract:

Electron-impact excitation collision strengths for transitions between all singly excited levels up to the n = 4 shell of helium-like argon and the n = 4 and 5 shells of helium-like iron have been calculated using a radiation-damped R-matrix approach. The theoretical collision strengths have been examined and associated with their infinite-energy limit values to allow the preparation of Maxwell-averaged effective collision strengths. These are conservatively considered to be accurate to within 20% at all temperatures, 3 × 10^5 - 3 × 10^8 K for Ar^16+ and 10^6 - 10^9 K for Fe^24+. They have been compared with the results of previous studies, where possible, and we find broad accord. The corresponding rate coefficients are required for use in the calculation of derived, collisional-radiative, effective emission coefficients for helium-like lines for diagnostic application to fusion and astrophysical plasmas. The uncertainties in the fundamental collision data have been used to provide a critical assessment of the expected resultant uncertainties in such derived data, including redistributive and cascade collisional-radiative effects. The consequential uncertainties in the parts of the effective emission coefficients driven by excitation from the ground levels for the key w, x, y and z lines vary between 5% and 10%. Our results remove an uncertainty in the reaction rates of a key class of atomic processes governing the spectral emission of helium-like ions in plasmas.
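For context, the Maxwell-averaged effective collision strength referred to above is conventionally defined (the standard definition, not specific to this paper) as

$$ \Upsilon_{ij}(T_e) = \int_0^\infty \Omega_{ij}(E_j)\,\exp\!\left(-\frac{E_j}{kT_e}\right) d\!\left(\frac{E_j}{kT_e}\right), $$

where $\Omega_{ij}$ is the collision strength and $E_j$ is the energy of the scattered electron relative to the final state $j$.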

Relevance: 100.00%

Abstract:

The exponential growth of the world population has led to an increase in settlements located in areas prone to natural disasters, including earthquakes. Consequently, despite important advances in the modelling of natural catastrophes and in risk mitigation actions, overall human losses have continued to increase and unprecedented economic losses have been registered. In the research work presented herein, various areas of earthquake engineering and seismology are thoroughly investigated, and a case study application for mainland Portugal is performed. Seismic risk assessment is a critical link in the reduction of casualties and damage due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible numerical tools and software. In the present work, an open-source platform for seismic hazard and risk assessment is developed. This software is capable of computing the distribution of losses or damage for an earthquake scenario (deterministic event-based), or the earthquake losses due to all the possible seismic events that might occur within a region over a given interval of time (probabilistic event-based). This effort has been developed following an open and transparent philosophy, and the platform is therefore available to any individual or institution. The estimation of seismic risk depends mainly on three components: seismic hazard, exposure and vulnerability. The latter component assumes special importance, as intervening with appropriate retrofitting solutions can directly decrease the seismic risk. The employment of analytical methodologies is fundamental in the assessment of structural vulnerability, particularly in regions where post-earthquake building damage data might not be available. Several common methodologies are investigated, and conclusions are drawn regarding the method that provides an optimal balance between accuracy and computational effort. In addition, a simplified approach based on displacement-based earthquake loss assessment (DBELA) is proposed, which allows for the rapid estimation of fragility curves considering a wide spectrum of uncertainties. A novel vulnerability model for the reinforced concrete building stock in Portugal is proposed in this work, using statistical information collected from hundreds of real buildings. An analytical approach based on nonlinear time history analysis is adopted, and the impact of a set of key parameters is investigated, including the damage state criteria and the chosen intensity measure type. A comprehensive review of previous studies that contributed to the understanding of seismic hazard and risk for Portugal is presented. An existing seismic source model was employed with recently proposed attenuation models to calculate probabilistic seismic hazard throughout the territory. These results are combined with information from the 2011 Building Census and the aforementioned vulnerability model to estimate economic loss maps for a return period of 475 years. The losses are disaggregated across the different building typologies, and conclusions are drawn regarding the types of construction most vulnerable to seismic activity.
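A minimal sketch of the fragility-curve ingredient described above, assuming the common lognormal form; the median capacity theta and dispersion beta below are illustrative placeholders, not the thesis's calibrated values.

```python
# Analytical fragility curve: probability of reaching or exceeding a
# damage state given an intensity measure, modelled as a lognormal CDF.
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """P(damage >= state | IM = im); theta = median capacity (same units
    as im), beta = lognormal dispersion."""
    return norm.cdf(np.log(im / theta) / beta)

pga = np.linspace(0.05, 1.0, 20)   # peak ground acceleration in g
print(fragility(pga, theta=0.35, beta=0.6))
```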

Relevance: 100.00%

Abstract:

This thesis focuses on the application of optimal alarm systems to nonlinear time series models. The most common classes of models in the analysis of real-valued and integer-valued time series are described. The construction of optimal alarm systems is covered and their applications explored. Among models with conditional heteroscedasticity, particular attention is given to the Fractionally Integrated Asymmetric Power ARCH, FIAPARCH(p, d, q), model, for which an optimal alarm system is implemented following both classical and Bayesian methodologies. Taking into consideration the particular characteristics of the APARCH(p, q) representation for financial time series, a possible counterpart for modelling time series of counts is introduced: the INteger-valued Asymmetric Power ARCH, INAPARCH(p, q). The probabilistic properties of the INAPARCH(1, 1) model are comprehensively studied, the conditional maximum likelihood (ML) estimation method is applied, and the asymptotic properties of the conditional ML estimator are obtained. The final part of the work consists of the implementation of an optimal alarm system for the INAPARCH(1, 1) model. An application to real data series is presented.
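For background, the real-valued APARCH(p, q) recursion that the INAPARCH model mirrors is usually written as follows (the standard Ding-Granger-Engle form, stated here for orientation rather than taken from the thesis):

$$ \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^{\delta} = \omega + \sum_{i=1}^{p} \alpha_i \bigl(|\varepsilon_{t-i}| - \gamma_i\,\varepsilon_{t-i}\bigr)^{\delta} + \sum_{j=1}^{q} \beta_j\,\sigma_{t-j}^{\delta}, $$

with $z_t$ i.i.d. with zero mean and unit variance, $\delta > 0$ and $|\gamma_i| < 1$; the asymmetry parameters $\gamma_i$ let positive and negative shocks affect volatility differently.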