906 results for Autoregressive-Moving Average model


Relevance: 30.00%

Abstract:

In this paper, reconstruction of three-dimensional (3D) patient-specific models of a hip joint from two-dimensional (2D) calibrated X-ray images is addressed. Existing 2D-3D reconstruction techniques usually reconstruct a patient-specific model of a single anatomical structure without considering the relationship to its neighboring structures. Thus, when those techniques are applied to the reconstruction of patient-specific models of a hip joint, the reconstructed models may penetrate each other due to the narrowness of the hip joint space and hence fail to represent the patient's true hip joint. To address this problem, we propose a novel 2D-3D reconstruction framework using an articulated statistical shape model (aSSM). Different from previous work on constructing an aSSM, where the joint posture is modeled as articulation in a training set via statistical analysis, here it is modeled as a parametrized rotation of the femur around the joint center. The exact rotation of the hip joint as well as the patient-specific models of the joint structures, i.e., the proximal femur and the pelvis, are then estimated by optimally fitting the aSSM to a limited number of calibrated X-ray images. Taking models segmented from CT data as the ground truth, we conducted validation experiments on both plastic and cadaveric bones. Qualitatively, the experimental results demonstrated that the proposed 2D-3D reconstruction framework preserved the hip joint structure and no model penetration was found. Quantitatively, average reconstruction errors of 1.9 mm and 1.1 mm were found for the pelvis and the proximal femur, respectively.
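A minimal sketch of the articulation parametrization described above, assuming an axis-angle rotation about the joint center; the function and parameter names are hypothetical, and in the actual framework such rotation parameters would be optimized jointly with the statistical shape coefficients against the calibrated X-ray images.

```python
# Illustrative sketch (not the authors' code): articulating a femur surface
# model by a parametrized rotation around the hip joint center, as the aSSM
# does instead of learning articulation statistically from a training set.
import numpy as np

def rotate_femur(vertices, joint_center, axis, angle_rad):
    """Rotate femur model vertices (N x 3) by angle_rad around a unit axis
    through joint_center (Rodrigues' rotation formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
    return (vertices - joint_center) @ R.T + joint_center

femur = np.random.rand(1000, 3) * 50.0        # placeholder vertex cloud (mm)
center = np.array([25.0, 25.0, 25.0])         # hypothetical joint center
posed = rotate_femur(femur, center, np.array([0.0, 0.0, 1.0]), np.deg2rad(10))
```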

Relevance: 30.00%

Abstract:

Accurate predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve patient safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary strengths of the prediction models. The fused output was used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypoglycemic and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
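As a toy illustration of the fusion idea (not the paper's DST/GA/GP schemes), the sketch below fuses two hypothetical predictor outputs with a convex weight, which a genetic algorithm could tune, and thresholds the result for hypoglycemia alarms; all names and numbers are assumptions.

```python
# Minimal sketch of predictor fusion and hypoglycemia alarming; the paper's
# DST/GA/GP fusion schemes are more elaborate.
import numpy as np

def fuse(pred_carx, pred_rnn, w=0.5):
    """Convex combination of two glucose predictions (mg/dL)."""
    return w * pred_carx + (1.0 - w) * pred_rnn

def hypo_alarm(fused_prediction, threshold=70.0):
    """Issue an alarm if the predicted glucose falls below the hypoglycemia
    threshold (70 mg/dL is a commonly used cutoff)."""
    return fused_prediction < threshold

carx = np.array([95.0, 82.0, 68.0])   # cARX predictions over the horizon
rnn = np.array([90.0, 78.0, 70.0])    # RNN predictions over the horizon
print(hypo_alarm(fuse(carx, rnn)))    # -> [False False  True]
```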

Relevance: 30.00%

Abstract:

Tropical forests are carbon-dense and highly productive ecosystems. Consequently, they play an important role in the global carbon cycle. In the present study, we used an individual-based forest model (FORMIND) to analyze the carbon balance of a tropical forest. The main processes of this model are tree growth, mortality, regeneration, and competition. Model parameters were calibrated using forest inventory data from a tropical forest at Mt. Kilimanjaro. The simulation results showed that the model successfully reproduces important characteristics of tropical forests (aboveground biomass, stem size distribution and leaf area index). The estimated aboveground biomass (385 t/ha) is comparable to biomass values in the Amazon and other tropical forests in Africa. The simulated forest reveals a gross primary production of 24 t C ha⁻¹ yr⁻¹. Modeling above- and belowground carbon stocks, we analyzed the carbon balance of the investigated tropical forest. The simulated carbon balance of this old-growth forest is zero on average. This study provides an example of how forest models can be used in combination with forest inventory data to investigate forest structure and local carbon balances.
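The following toy loop illustrates the individual-based bookkeeping in the spirit of such a gap model; FORMIND's actual growth, mortality, regeneration, and competition formulations are far more detailed, and every parameter value here is hypothetical.

```python
# Toy individual-based forest step: grow survivors, remove trees by a fixed
# mortality probability, recruit saplings, then sum biomass allometrically.
import random

trees = [{"dbh_cm": random.uniform(5, 80)} for _ in range(500)]

def annual_step(trees, growth_cm=0.4, mortality=0.02, recruits=10):
    survivors = [t for t in trees if random.random() > mortality]  # mortality
    for t in survivors:
        t["dbh_cm"] += growth_cm                                   # growth
    survivors += [{"dbh_cm": 1.0} for _ in range(recruits)]        # regeneration
    return survivors

def aboveground_biomass_t(trees, a=0.06, b=2.6):
    """Simple allometric biomass from stem diameter: AGB_kg = a * dbh^b."""
    return sum(a * t["dbh_cm"] ** b for t in trees) / 1000.0

for year in range(100):
    trees = annual_step(trees)
print(f"simulated plot AGB: {aboveground_biomass_t(trees):.0f} t")
```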

Relevance: 30.00%

Abstract:

An axisymmetric, elastic pipe is filled with an incompressible fluid and is immersed in a second, coaxial rigid pipe which contains the same fluid. A pressure pulse in the outer fluid annulus deforms the elastic pipe, which in turn induces a fluid motion in the fluid core. The aim of this study is to investigate streaming phenomena in the core that may originate from such a fluid-structure interaction. This work presents a numerical solver for such a configuration. It was developed in the OpenFOAM software environment and is based on the Arbitrary Lagrangian Eulerian (ALE) approach for moving meshes. The solver features a monolithic integration of the one-dimensional, coupled system between the elastic structure and the outer fluid annulus into a dynamic boundary condition for the moving surface of the fluid core. Results indicate that our configuration may serve as a mechanical model of the Tullio phenomenon (sound-induced vertigo).
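As a conceptual sketch of the one-dimensional coupling only, the toy scheme below lets an external annulus pressure pulse deflect an elastic wall through a linear tube law and drive core flow through continuity; this is not the OpenFOAM/ALE solver described above, and all parameters are hypothetical.

```python
# Toy 1D elastic-tube coupling: tube law links wall deflection to pressure,
# continuity and momentum update area and flux with a staggered explicit step.
import numpy as np

nx, dx, dt = 200, 0.005, 2e-4        # grid size, spacing (m), time step (s)
rho, K, A0 = 1000.0, 1e4, 1e-4       # density, wall stiffness (Pa), rest area (m^2)
A = np.full(nx, A0)                  # core cross-sectional area
Q = np.zeros(nx)                     # core volume flux
x = np.arange(nx) * dx

for step in range(500):
    t = step * dt
    # continuity: dA/dt = -dQ/dx (wall motion follows the core flow divergence)
    A[1:-1] -= dt * (Q[2:] - Q[:-2]) / (2 * dx)
    # localized, oscillating annulus pressure pulse acting on the elastic wall
    p_ext = 100.0 * np.exp(-((x - 0.1) / 0.02) ** 2) * np.sin(2 * np.pi * 50 * t)
    p = p_ext + K * (A / A0 - 1.0)   # linear tube law couples wall and core
    # momentum (advection neglected): dQ/dt = -(A/rho) dp/dx
    Q[1:-1] -= dt * (A[1:-1] / rho) * (p[2:] - p[:-2]) / (2 * dx)

print("max area perturbation:", np.max(np.abs(A - A0)))
```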

Relevance: 30.00%

Abstract:

Large uncertainties exist concerning the impact of Greenland ice sheet melting on the Atlantic meridional overturning circulation (AMOC) in the future, partly due to different sensitivity of the AMOC to freshwater input in the North Atlantic among climate models. Here we analyse five projections from different coupled ocean–atmosphere models with an additional 0.1 Sv (1 Sv = 10⁶ m³/s) of freshwater released around Greenland between 2050 and 2089. We find on average a further weakening of the AMOC at 26°N of 1.1 ± 0.6 Sv, representing a 27 ± 14% supplementary weakening in 2080–2089, as compared to the weakening relative to 2006–2015 due to the effect of the external forcing only. This weakening is lower than what has been found with the same ensemble of models in an identical experimental set-up but under recent historical climate conditions. This lower sensitivity in a warmer world is explained by two main factors. First, a tendency toward decoupling between the surface and the deep ocean is detected, caused by increased thermal stratification in the North Atlantic under the effect of global warming. This induces a shoaling of deep ocean ventilation through convection, so that only intermediate levels are ventilated. The second important effect concerns the so-called Canary Current freshwater leakage: a process by which the additionally released freshwater in the North Atlantic leaks along the Canary Current and escapes the convection zones towards the subtropical area. This leakage increases in a warming climate as a consequence of decreasing gyre asymmetry due to changes in Ekman pumping. We suggest that these modifications are related to the northward shift of the jet stream in a warmer world. For these two reasons, the AMOC is less susceptible to freshwater perturbations (near the deep water formation sites) in the North Atlantic as compared to the recent historical climate conditions. Finally, we propose a bilinear model that accounts for the two aforementioned processes to give a conceptual explanation of the decreasing AMOC sensitivity to freshwater input. Within the limit of this bilinear model, we find that 62 ± 8% of the reduction in sensitivity is related to the changes in gyre asymmetry and freshwater leakage and 38 ± 8% is due to the reduction in deep ocean ventilation associated with the increased stratification in the North Atlantic.
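The abstract does not give the bilinear model's exact form, so the sketch below only illustrates the attribution idea on synthetic numbers: regress the sensitivity change on two process indices and report each term's share of the explained change.

```python
# Hypothetical illustration of the two-process attribution, not the paper's
# actual bilinear model; indices, coefficients, and data are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # five model projections
gyre_leakage = rng.normal(1.0, 0.2, n)   # gyre-asymmetry / leakage change index
ventilation = rng.normal(1.0, 0.1, n)    # deep-ventilation reduction index
dS = 0.62 * gyre_leakage + 0.38 * ventilation + rng.normal(0, 0.01, n)

X = np.column_stack([gyre_leakage, ventilation])
beta, *_ = np.linalg.lstsq(X, dS, rcond=None)   # fit the two-term model
shares = beta * X.mean(axis=0)
shares /= shares.sum()                          # fractional attribution
print(f"leakage/gyre share: {shares[0]:.0%}, ventilation share: {shares[1]:.0%}")
```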

Relevance: 30.00%

Abstract:

Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the sparsely observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble was produced with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely the subduction regions of the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant agreement among the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14°C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
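A sketch of the Newtonian relaxation (nudging) term with a physically based restoring coefficient; the numbers are typical order-of-magnitude values, not the exact IPSLCM5A settings.

```python
# SST nudging sketch: a restoring strength of order the air-sea heat-flux
# feedback (tens of W/m^2/K) translates into a mixed-layer relaxation
# timescale of roughly two months.
rho_w, c_p = 1025.0, 3990.0      # seawater density (kg/m^3), heat capacity (J/kg/K)
h_mix = 50.0                     # assumed mixed-layer depth (m)
lam = 40.0                       # restoring strength (W/m^2/K), illustrative

tau = rho_w * c_p * h_mix / lam  # relaxation timescale (s)
print(f"relaxation timescale: {tau / 86400:.0f} days")

def nudge_sst(T_model, T_obs, dt):
    """One nudging step: relax model SST toward observed SST."""
    return T_model - dt * (T_model - T_obs) / tau
```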

Relevance: 30.00%

Abstract:

A detailed characterization of air quality in the megacity of Paris (France) during two 1-month intensive campaigns and from additional 1-year observations revealed that about 70% of the urban background fine particulate matter (PM) is transported on average into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in situ measurements during short intensive and longer-term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from the PMCAMx and CHIMERE chemistry transport models. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon, and tracer measurements during two intensive campaigns. Primary fossil fuel combustion emissions constituted less than 20% in winter and 40% in summer of carbonaceous fine PM, unexpectedly small shares for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e., from biogenic precursors and from wood burning. Black carbon concentrations are on the lower end of values encountered in megacities worldwide, but still represent an issue for air quality. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity only being partially responsible for its own average and peak PM levels has important implications for air pollution regulation policies.

Relevance: 30.00%

Abstract:

Trabecular bone is a porous mineralized tissue playing a major load-bearing role in the human body. Prediction of age-related and disease-related fractures and of the behavior of bone-implant systems requires a thorough understanding of its structure-mechanical property relationships, which can be obtained using microcomputed tomography-based finite element modeling. In this study, a nonlinear model for trabecular bone as a cohesive-frictional material was implemented in a large-scale computational framework and validated by comparison of μFE simulations with experimental tests in uniaxial tension and compression. A good correspondence of stiffness and yield points between simulations and experiments was found for a wide range of bone volume fraction and degree of anisotropy, in both tension and compression, using a non-calibrated, average set of material parameters. These results demonstrate the ability of the model to capture the effects leading to failure of bone for three anatomical sites and several donors, and the model may be used in the future to determine the apparent behavior of trabecular bone and its evolution with age, disease, and treatment.
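For readers unfamiliar with the term, the sketch below shows one generic cohesive-frictional yield criterion (Drucker-Prager) purely as an illustration; the study's constitutive model for bone is more elaborate, and the friction and cohesion values here are hypothetical.

```python
# Generic Drucker-Prager check: strength grows under pressure (friction)
# on top of a pressure-independent cohesion term.
import numpy as np

def drucker_prager_yield(stress, alpha=0.25, k=30.0):
    """Return f; the 3x3 stress tensor (MPa) is elastic if f <= 0.
    f = sqrt(J2) + alpha * I1 - k, with friction alpha and cohesion k
    (tension-positive convention)."""
    I1 = np.trace(stress)
    s = stress - I1 / 3.0 * np.eye(3)      # deviatoric part
    J2 = 0.5 * np.tensordot(s, s)
    return np.sqrt(J2) + alpha * I1 - k

sigma = np.diag([-40.0, -5.0, -5.0])       # a triaxial compression state (MPa)
print("yielding" if drucker_prager_yield(sigma) > 0 else "elastic")
```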

Relevance: 30.00%

Abstract:

PURPOSE: The implementation of genomic-based medicine is hindered by unresolved questions regarding data privacy and delivery of interpreted results to health-care practitioners. We used DNA-based prediction of HIV-related outcomes as a model to explore critical issues in clinical genomics. METHODS: We genotyped 4,149 markers in HIV-positive individuals. Variants allowed for prediction of 17 traits relevant to HIV medical care, inference of patient ancestry, and imputation of human leukocyte antigen (HLA) types. Genetic data were processed under a privacy-preserving framework using homomorphic encryption, and clinical reports describing potentially actionable results were delivered to health-care providers. RESULTS: A total of 230 patients were included in the study. We demonstrated the feasibility of encrypting a large number of genetic markers, inferring patient ancestry, computing monogenic and polygenic trait risks, and reporting results under privacy-preserving conditions. The average execution time of a multimarker test on encrypted data was 865 ms on a standard computer. The proportion of tests returning potentially actionable genetic results ranged from 0 to 54%. CONCLUSIONS: The model of implementation presented herein informs strategies to deliver genomic test results for clinical care. Data encryption to ensure privacy helps to build patient trust, a key requirement on the road to genomic-based medicine. (Genetics in Medicine, advance online publication, 14 January 2016; doi:10.1038/gim.2015.167.)
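A sketch of how a multimarker score can be computed on encrypted genotypes with an additively homomorphic (Paillier) cryptosystem, here via the python-paillier library; the study's actual framework and cryptosystem details may differ, and the genotypes and weights below are made up.

```python
# Privacy-preserving multimarker score: the server combines ciphertexts with
# public per-marker weights and never sees the raw genotypes.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

genotypes = [0, 1, 2, 1]              # allele counts at 4 markers (patient side)
weights = [1.5, -0.7, 2.1, 0.3]       # published per-marker effect sizes

# Patient encrypts genotypes; server computes the weighted score on ciphertexts
# (scalar-times-ciphertext products and homomorphic additions).
enc_genotypes = [public_key.encrypt(g) for g in genotypes]
enc_score = sum(w * g for w, g in zip(weights, enc_genotypes))

# Only the key holder can decrypt the interpreted result.
print("risk score:", private_key.decrypt(enc_score))   # -> 3.8
```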

Relevance: 30.00%

Abstract:

Docetaxel (Taxotere®) is currently used intravenously as an anticancer agent and is primarily metabolized by cytochrome P450 3A (CYP3A). The HIV protease inhibitor ritonavir, a strong CYP3A4 inhibitor, decreased first-pass metabolism of orally administered docetaxel. Anticancer effects of ritonavir itself have also been described. Here, we aimed to test whether ritonavir co-administration could decrease intratumoral metabolism of intravenously administered docetaxel and thus increase the antitumor activity of docetaxel in an orthotopic, immunocompetent mouse model of breast cancer. Spontaneously arising K14cre;Brca1(F/F);p53(F/F) mouse mammary tumors were orthotopically implanted in syngeneic mice lacking Cyp3a (Cyp3a(-/-)) to limit ritonavir effects on systemic docetaxel clearance. Over 3 weeks, docetaxel (20 mg/kg) was administered intravenously once weekly, with or without ritonavir (12.5 mg/kg) administered orally for 5 days per week. Untreated mice were used as controls for tumor growth. Ritonavir treatment alone did not significantly affect the median time of survival (14 vs. 10 days). Median time of survival in docetaxel-treated mice was 54 days. Ritonavir co-treatment significantly increased this to 66 days and substantially reduced relative average tumor size, without altering tumor histology. Concentrations of the major docetaxel metabolite M2 in tumor tissue were reduced by ritonavir co-administration, whereas tumor RNA expression of Cyp3a was unaltered. In this breast cancer model, we observed no direct antitumor effect of ritonavir alone, but we found enhanced efficacy of docetaxel treatment when combined with ritonavir. Our data therefore suggest that decreased docetaxel metabolism inside the tumor, as a result of Cyp3a inhibition, contributes to increased antitumor activity.

Relevance: 30.00%

Abstract:

In addition to cognitive decline, individuals affected by Alzheimer's disease (AD) can experience important neuropsychiatric symptoms, including sleep disturbances. We characterized the sleep-wake cycle in the TgCRND8 mouse model of AD, which overexpresses a mutant human form of amyloid precursor protein, resulting in high levels of β-amyloid and plaque formation by 3 months of age. Polysomnographic recordings in freely moving mice were conducted to study sleep-wake cycle architecture at 3, 7 and 11 months of age, and corresponding levels of β-amyloid in brain regions regulating sleep-wake states were measured. At all ages, TgCRND8 mice showed increased wakefulness and reduced non-rapid eye movement (NREM) sleep during the resting and active phases. Increased wakefulness in TgCRND8 mice was accompanied by a shift in the waking power spectrum towards fast frequency oscillations in the beta (14-20 Hz) and low gamma (20-50 Hz) ranges. Given the phenotype of hyperarousal observed in TgCRND8 mice, the role of noradrenergic transmission in the promotion of arousal, and previous work reporting an early disruption of the noradrenergic system in TgCRND8 mice, we tested the effects of the alpha-1-adrenoreceptor antagonist prazosin on sleep-wake patterns in TgCRND8 and non-transgenic (NTg) mice. We found that a lower dose (2 mg/kg) of prazosin increased NREM sleep in NTg but not in TgCRND8 mice, whereas a higher dose (5 mg/kg) increased NREM sleep in both genotypes, suggesting altered sensitivity to noradrenergic blockade in TgCRND8 mice. Collectively, our results demonstrate that amyloidosis in TgCRND8 mice is associated with sleep-wake cycle dysfunction, characterized by hyperarousal, validating this model as a tool towards understanding the relationship between β-amyloid overproduction and disrupted sleep-wake patterns in AD.

Relevance: 30.00%

Abstract:

This research project sought to answer the primary research question: What occurs when the music program in a church changes its emphasis from performance to education? This qualitative study of a church choir included participant observation of Wednesday evening and Sunday morning rehearsals over a 12-week period, individual interviews, group interviews, written responses, and written and visual assessment of musical skills. The goal was a rich description of the participants and of the themes emerging from the shift in emphasis. Analysis of data occurred through inductive processing. Data were initially coded; the codes were then categorized into sub-themes and finally into major themes. Early analysis of the data began with reflection in a researcher journal. Following the completion of the study, the journal was entered into a word processor, as were transcriptions of videotaped rehearsals and written reflections from the participants. After all data had been reviewed repeatedly and entered into the word processor, the data were coded, reexamined, and finally categorized into sub-themes and themes. After coding and identification of major themes and sub-themes, the findings were challenged by looking for disconfirming evidence. Finally, after the completion of the analysis stage, member checks were conducted. The results of the analysis revealed themes that could be associated either with the choir or with the director. The key themes primarily associated with the choir were: response to the change in rehearsal format; attitude toward learning; appropriateness of the community learning model; and members' perceptions of the results of the program. The key themes associated with the director were: the conductor assuming the role of educator; the conductor recognizing the choir as learners; the conductor treating rehearsals as a time for teaching and learning; and the conductor's perception of the effectiveness of the change in focus. The study concluded that a change in focus from performance to education did not noticeably improve the sound of the choir after twelve weeks. There were, however, indications that improvements were being made by individual members. Further study of the effects over a longer period of time is recommended.

Relevance: 30.00%

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were used for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation, whose dependent variable is a non-negative integer. Results. The model included two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of a chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85%, and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression, was a useful approach to demand estimation for primary care in urban settings.
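A minimal sketch of the two-equation estimation on synthetic data, using statsmodels' Probit and NegativeBinomial; the variable names are illustrative, not the CHIS 2005 field names.

```python
# Two-part demand model sketch: probit for any use, negative binomial for
# visit counts among users; data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
X = sm.add_constant(np.column_stack([
    rng.normal(45, 15, n),               # age
    rng.binomial(1, 0.8, n),             # insured
    rng.binomial(1, 0.3, n),             # chronic condition
]))

p_use = 1 / (1 + np.exp(-(X @ [0.5, -0.01, 1.0, 0.8])))        # synthetic truth
used = rng.binomial(1, p_use)
visits = rng.poisson(np.exp(X @ [0.2, 0.005, 0.1, 0.6])) + 1   # >= 1 for users

probit = sm.Probit(used, X).fit(disp=False)                    # use/non-use
nb = sm.NegativeBinomial(visits[used == 1], X[used == 1]).fit(disp=False)
print(probit.params, nb.params, sep="\n")
```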

Relevance: 30.00%

Abstract:

The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which can be indicated by the cross-level interaction effects. However, the statistical hypothesis test for the effect of a cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at specific time points. Thus, reporting and interpreting effect sizes have received increasing emphasis in HLGM in recent years, due to the limitations of, and increased criticism of, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparison and their corresponding confidence intervals in HLGM analyses, since there is a lack of appropriate, standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM, as well as a lack of computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at specific time points, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, with the suitable method, for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that, even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analysis provide additional and meaningful information for assessing group effects on individual trajectories. In addition, the three methods of constructing 95% confidence intervals around the corresponding effect sizes address the uncertainty of effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution-based method when the assumptions hold, and the bootstrap bias-corrected and accelerated method when the assumptions are not met.
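A minimal sketch of the idea, assuming a linear growth model: the model-implied standardized group difference at time t is the implied mean difference divided by a pooled outcome standard deviation. These are illustrative formulas and numbers, not the project's proposed functions.

```python
# Model-implied effect size at time t from a linear growth model with a
# group main effect and a group-by-time (cross-level) interaction.
def trajectory_effect_size(gamma_group, gamma_interaction, t, sd_pooled):
    """d(t) = (group main effect + group-by-time effect * t) / pooled SD."""
    return (gamma_group + gamma_interaction * t) / sd_pooled

# Even when the interaction test is non-significant, d(t) can grow large
# late in the study window:
for t in [0, 2, 4, 8]:
    d = trajectory_effect_size(gamma_group=0.5, gamma_interaction=0.4,
                               t=t, sd_pooled=4.0)
    print(f"t = {t}: d = {d:.2f}")
```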

Relevance: 30.00%

Abstract:

A portable Fourier transform spectrometer (FTS), model EM27/SUN, was deployed onboard the research vessel Polarstern to measure the column-average dry-air mole fractions of carbon dioxide (XCO2) and methane (XCH4) by means of direct-sunlight absorption spectrometry. We report on technical developments as well as data calibration and reduction measures required to achieve the targeted accuracy of fractions of a percent in retrieved XCO2 and XCH4 while operating the instrument under field conditions onboard the moving platform during a 6-week cruise on the Atlantic from Cape Town (South Africa, 34° S, 18° E; 5 March 2014) to Bremerhaven (Germany, 54° N, 8° E; 14 April 2014). We demonstrate that our solar tracker typically achieved a tracking precision of better than 0.05° toward the center of the sun throughout the ship cruise, which facilitates accurate XCO2 and XCH4 retrievals even under harsh ambient wind conditions. We define several quality filters that screen spectra, e.g., when the field of view was partially obstructed by ship structures or when the lines of sight crossed the ship exhaust plume. The measurements in clean oceanic air can be used to characterize a spurious air-mass dependency. After the campaign, deployment of the spectrometer alongside the TCCON (Total Carbon Column Observing Network) instrument at Karlsruhe, Germany, allowed for determining a calibration factor that makes the entire campaign record traceable to World Meteorological Organization (WMO) standards. Comparisons to observations of the GOSAT satellite and concentration fields modeled by the European Centre for Medium-Range Weather Forecasts (ECMWF) Copernicus Atmosphere Monitoring Service (CAMS) demonstrate that the observational setup is well suited to provide validation opportunities above the ocean and along interhemispheric transects.
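A sketch of the standard column-averaging arithmetic behind such retrievals, assuming the common practice of deriving the dry-air column from the retrieved O2 column and its known atmospheric mole fraction; all numbers are illustrative.

```python
# Column-average dry-air mole fraction: XGAS = gas column / dry-air column,
# with the dry-air column inferred from the retrieved O2 column and its
# well-known mole fraction of 0.2095.
col_co2 = 8.45e21   # retrieved CO2 column (molecules / cm^2), hypothetical
col_o2 = 4.45e24    # retrieved O2 column  (molecules / cm^2), hypothetical

col_dry_air = col_o2 / 0.2095            # dry-air column from O2
xco2_ppm = col_co2 / col_dry_air * 1e6   # column-average dry-air mole fraction
print(f"XCO2 = {xco2_ppm:.1f} ppm")      # -> roughly 398 ppm
```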