852 results for "Initial data problem"
Abstract:
Java Enterprise Applications (JEAs) are complex software systems written using multiple technologies. Moreover, they are usually distributed systems and use a database for persistence. A particular problem that appears in the design of these systems is the lack of a rich business model. In this paper we propose a technique to support the recovery of such rich business objects starting from anemic Data Transfer Objects (DTOs). By exposing the code duplication among the application elements that use the DTOs, we suggest which business logic can be moved from the other classes into the DTOs.
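As an illustration of the refactoring described above (duplicated logic migrating from service classes into a formerly anemic DTO), here is a minimal sketch; all class and field names are invented, and Python is used for brevity rather than the Java of the original applications.

    # Hypothetical illustration: duplicated business logic scattered across
    # service classes is moved into the DTO, turning an anemic data carrier
    # into a richer business object.
    from dataclasses import dataclass

    # Before: anemic DTO; the same computation is duplicated in two services.
    @dataclass
    class OrderDTO:
        quantity: int
        unit_price: float

    class PricingService:
        def total(self, order: OrderDTO) -> float:
            return order.quantity * order.unit_price      # duplicated logic

    class InvoiceService:
        def invoice_total(self, order: OrderDTO) -> float:
            return order.quantity * order.unit_price      # same logic, duplicated

    # After: the duplicated computation is promoted into the (former) DTO.
    @dataclass
    class Order:
        quantity: int
        unit_price: float

        def total(self) -> float:
            return self.quantity * self.unit_price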
Abstract:
BACKGROUND CONTEXT: The Neck Disability Index is frequently used to measure neck-related outcomes. The statistical rigor of the Neck Disability Index has been assessed with conflicting outcomes. To date, Confirmatory Factor Analysis of the Neck Disability Index has not been reported for a suitably large population study. Because the Neck Disability Index is not a condition-specific measure of neck function, initial Confirmatory Factor Analysis should consider problematic neck patients as a homogeneous group. PURPOSE: We sought to analyze the factor structure of the Neck Disability Index through Confirmatory Factor Analysis in a symptomatic, homogeneous neck population, with respect to the pooled population and sex subgroups. STUDY DESIGN: This was a secondary analysis of pooled data. PATIENT SAMPLE: A total of 1,278 symptomatic neck patients (67.5% female, median age 41 years): 803 with nonspecific neck problems and 475 with whiplash-associated disorder. OUTCOME MEASURES: The Neck Disability Index was used to measure outcomes. METHODS: We analyzed pooled baseline data from six independent studies of patients with neck problems who completed Neck Disability Index questionnaires at baseline. The Confirmatory Factor Analysis was considered in three scenarios: the full sample and each sex separately. Models were compared empirically for best fit. RESULTS: Two-factor models had good psychometric properties across both the pooled sample and the sex subgroups. However, according to these analyses, the one-factor solution is preferable from both a statistical perspective and parsimony. The two-factor model was close to significant for the male subgroup (p<.07), where questions separated into constructs of mental function (pain, reading, headaches, and concentration) and physical function (personal care, lifting, work, driving, sleep, and recreation). CONCLUSIONS: The Neck Disability Index demonstrated a one-factor structure when analyzed by Confirmatory Factor Analysis in a pooled, homogeneous sample of neck problem patients. However, a two-factor model did approach significance for male subjects, where questions separated into constructs of mental and physical function. Further investigations in different conditions and in subgroup- and sex-specific populations are warranted.
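For orientation, the measurement model underlying such a Confirmatory Factor Analysis of the ten Neck Disability Index items can be written in generic notation (not the study's own) as

    \[ x_j = \lambda_j \xi + \delta_j, \quad j = 1, \dots, 10, \qquad \Sigma(\theta) = \Lambda \Phi \Lambda^{\top} + \Theta_\delta , \]

where x_j is the j-th item score, \xi the latent disability factor with loading \lambda_j, and \delta_j a unique error; the two-factor variant splits the items over two correlated factors. Nested one- and two-factor models are typically compared by fit indices and the chi-square difference test, \Delta\chi^2 = \chi^2_{1\text{-factor}} - \chi^2_{2\text{-factor}}, on degrees of freedom equal to the difference in free parameters.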
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine if current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared with cases with no second event present. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes. Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle-based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.
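Purely as a sketch of the kind of binary logistic model described for second-event occurrence (the column and file names below are hypothetical, not those of the NASS/CDS extraction, and case weights are ignored for simplicity):

    # Sketch of a binary logistic regression for second-event occurrence.
    import pandas as pd
    import statsmodels.formula.api as smf

    crashes = pd.read_csv("barrier_cases.csv")   # hypothetical analysis file

    # A weighted analysis could instead use a GLM with frequency weights; this
    # unweighted form only illustrates the model structure.
    model = smf.logit(
        "second_event ~ barrier_stiffness + exit_trajectory + vehicle_type + tracking",
        data=crashes,
    ).fit()
    print(model.summary())   # exponentiated coefficients give odds ratios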
Abstract:
Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To contribute to filling this gap, the dynamic process-based model TRACE, which has the ability to simulate 15N tracer redistribution in forest ecosystems, was used to study N cycling processes in a mountain spruce forest on the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether the process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get the whole-system C and N cycling right in a model without really knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. With the ability to simulate 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition, and initial site conditions. After the calibration of key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution to compare against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model is able to reproduce the fact that the largest part of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer by the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
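TRACE itself is a full ecosystem model; purely to illustrate the idea of tracking labeled 15N through ecosystem pools, a toy first-order redistribution model is sketched below. All pool names, transfer rates, and time steps are invented for illustration and are not TRACE parameters.

    # Toy 15N tracer redistribution: labeled N applied to the forest floor is
    # transferred to mineral soil and vegetation at first-order rates.
    import numpy as np

    k_floor_to_soil = 0.30   # yr^-1, hypothetical transfer rate
    k_floor_to_veg = 0.10    # yr^-1, hypothetical plant-uptake rate

    tracer = {"forest_floor": 1.0, "mineral_soil": 0.0, "vegetation": 0.0}  # fraction of applied 15N
    dt, years = 0.1, 10.0

    for _ in np.arange(0.0, years, dt):
        flux_soil = k_floor_to_soil * tracer["forest_floor"] * dt
        flux_veg = k_floor_to_veg * tracer["forest_floor"] * dt
        tracer["forest_floor"] -= flux_soil + flux_veg
        tracer["mineral_soil"] += flux_soil
        tracer["vegetation"] += flux_veg

    print({pool: round(frac, 3) for pool, frac in tracer.items()})  # simulated 15N recovery by pool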
Abstract:
BACKGROUND: Detecting a benefit from closure of patent foramen ovale in patients with cryptogenic stroke is hampered by low rates of stroke recurrence and uncertainty about the causal role of patent foramen ovale in the index event. A method to predict patent foramen ovale-attributable recurrence risk is needed. However, individual databases generally have too few stroke recurrences to support risk modeling. Prior studies of this population have been limited by low statistical power for examining factors related to recurrence. AIMS: The aim of this study was to develop a database to support modeling of patent foramen ovale-attributable recurrence risk by combining extant data sets. METHODS: We identified investigators with extant databases including subjects with cryptogenic stroke investigated for patent foramen ovale, determined the availability and characteristics of data in each database, collaboratively specified the variables to be included in the Risk of Paradoxical Embolism database, harmonized the variables across databases, and collected new primary data when necessary and feasible. RESULTS: The Risk of Paradoxical Embolism database has individual clinical, radiologic, and echocardiographic data from 12 component databases, including subjects with cryptogenic stroke both with (n = 1925) and without (n = 1749) patent foramen ovale. In the patent foramen ovale subjects, a total of 381 outcomes (stroke, transient ischemic attack, death) occurred (median follow-up 2.2 years). While there were substantial variations in data collection between studies, there was sufficient overlap to define a common set of variables suitable for risk modeling. CONCLUSION: While individual studies are inadequate for modeling patent foramen ovale-attributable recurrence risk, collaboration between investigators has yielded a database with sufficient power to identify those patients at highest risk for a patent foramen ovale-related stroke recurrence who may have the greatest potential benefit from patent foramen ovale closure.
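A minimal sketch of the harmonization step described above, pooling component databases into one individual-patient dataset; the file names, column names, and codings are hypothetical, not those of the Risk of Paradoxical Embolism database.

    # Harmonize individual-patient data from several component databases into
    # one pooled dataset suitable for risk modeling.
    import pandas as pd

    column_map = {  # per-study renaming to a common variable set (hypothetical)
        "study_a.csv": {"age_yrs": "age", "pfo_present": "pfo", "recur": "recurrence"},
        "study_b.csv": {"AGE": "age", "PFO": "pfo", "RECURRENT_EVENT": "recurrence"},
    }

    frames = []
    for path, mapping in column_map.items():
        df = pd.read_csv(path).rename(columns=mapping)
        df["source"] = path                      # keep provenance for study-level adjustment
        frames.append(df[["age", "pfo", "recurrence", "source"]])

    pooled = pd.concat(frames, ignore_index=True)  # pooled database for risk modeling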
Abstract:
Dental erosion is the non-carious loss of dental substance induced by the direct impact of exogenous or endogenous acids. It results in a loss of dental hard tissue, which can be severe in some groups, such as people with eating disorders, patients with gastroesophageal reflux disease, and persons who consume large amounts of acidic drinks and foodstuffs. For these persons, erosion can impair well-being due to changes in appearance and/or loss of function of the teeth, e.g., the occurrence of hypersensitivity if the dentin is exposed. If erosion reaches an advanced stage, time-consuming and costly therapies may be necessary. The therapy, in turn, poses a challenge for the dentist, particularly if the defects are diagnosed at an advanced stage. While initial and moderate defects can mostly be treated non-invasively or minimally invasively, severe defects often require complex therapeutic strategies, which frequently entail extensive loss of dental hard tissue due to preparatory measures. A major goal should therefore be to diagnose dental erosion at an early stage, to avoid functional and esthetic impairments as well as pain, and to ensure the longevity of the dentition.
Abstract:
The aim of this study was to evaluate the survival and success rates of immediately restored implants with sandblasted, large-grit, acid-etched (SLA) surfaces over a period of 5 years. Twenty patients (mean age, 47.3 years) received a total of 21 SLA wide-neck implants in healed mandibular first molar sites after initial periodontal treatment. To be included in the study, the implants had to demonstrate primary stability with an insertion torque value of 35 Ncm. A provisional restoration was fabricated chairside and placed on the day of surgery. Definitive cemented restorations were inserted 8 weeks after surgery. Community Periodontal Index of Treatment Needs (CPITN) scores and the radiographic distance between the implant shoulder and the first visible bone-implant contact (DIB) were measured and compared over the study period. The initial mean CPITN score was 3.24 and decreased to 1.43 over the study period. At the postoperative radiographic examination, the mean DIB was 1.41 mm for the 21 implants, indicating that part of the machined neck of the implants was placed slightly below the osseous crest. The mean DIB value increased to 1.99 mm at the 5-year examination. This increase proved to be statistically significant (P < .0001). Between the baseline and 5-year examinations, the mean bone crest level loss was 0.58 mm. Success and survival rates of the 21 implants after 5 years of function were 100%. This 5-year study confirms that immediate restoration of mandibular molar wide-neck implants with good primary stability, as indicated by insertion torque values of at least 35 Ncm, is a safe and predictable procedure.
Abstract:
The present study analyzed the smoking history and willingness to quit smoking of patients referred for diagnosis and treatment of different oral mucosal lesions. Prior to the initial clinical examination, patients filled in a standardized questionnaire regarding their current and former smoking habits and their willingness to quit. Definitive diagnoses were classified into three groups (benign/reactive lesions, premalignant lesions and conditions, and malignant diseases) and correlated with the self-reported data in the questionnaires. Of the 980 patients included, 514 (52%) described themselves as never smokers, 202 (21%) as former smokers, and 264 (27%) as current smokers. In the group of current smokers, 23% thought their premalignant lesions/conditions were related to their smoking habit, but only 15% of the patients with malignant mucosal diseases saw that correlation. Only 14% of the smokers wanted to commence smoking cessation within the next 30 days. Patients with malignant diseases (31%) showed greater willingness to quit than patients diagnosed with benign/reactive lesions (11%). Future clinical studies should attempt (1) to enhance patients' awareness of the negative impact of smoking on the oral mucosa and (2) to increase willingness to quit in smokers referred to a dental/oral medicine setting.
Abstract:
BACKGROUND Current guidelines give recommendations for preferred combination antiretroviral therapy (cART). We investigated factors influencing the choice of initial cART in clinical practice and its outcome. METHODS We analyzed treatment-naive adults with human immunodeficiency virus (HIV) infection participating in the Swiss HIV Cohort Study and starting cART from January 1, 2005, through December 31, 2009. The primary end point was the choice of the initial antiretroviral regimen. Secondary end points were virologic suppression, the increase in CD4 cell counts from baseline, and treatment modification within 12 months after starting treatment. RESULTS A total of 1957 patients were analyzed. Tenofovir-emtricitabine (TDF-FTC)-efavirenz was the most frequently prescribed cART (29.9%), followed by TDF-FTC-lopinavir/r (16.9%), TDF-FTC-atazanavir/r (12.9%), zidovudine-lamivudine (ZDV-3TC)-lopinavir/r (12.8%), and abacavir/lamivudine (ABC-3TC)-efavirenz (5.7%). Differences in prescription were noted among different Swiss HIV Cohort Study sites (P < .001). In multivariate analysis, compared with TDF-FTC-efavirenz, starting TDF-FTC-lopinavir/r was associated with prior AIDS (relative risk ratio, 2.78; 95% CI, 1.78-4.35), HIV-RNA greater than 100 000 copies/mL (1.53; 1.07-2.18), and CD4 cell counts greater than 350 cells/μL (1.67; 1.04-2.70); TDF-FTC-atazanavir/r with a depressive disorder (1.77; 1.04-3.01), HIV-RNA greater than 100 000 copies/mL (1.54; 1.05-2.25), and an opiate substitution program (2.76; 1.09-7.00); and ZDV-3TC-lopinavir/r with female sex (3.89; 2.39-6.31) and CD4 cell counts greater than 350 cells/μL (4.50; 2.58-7.86). At 12 months, 1715 patients (87.6%) had achieved a viral load of less than 50 copies/mL, and CD4 cell counts had increased by a median (interquartile range) of 173 (89-269) cells/μL. Virologic suppression was more likely with TDF-FTC-efavirenz, and the CD4 increase was higher with ZDV-3TC-lopinavir/r. No differences in outcome were observed among Swiss HIV Cohort Study sites. CONCLUSIONS Large differences in prescription but not in outcome were observed among study sites. A trend toward individualized cART was noted, suggesting that initial cART is significantly influenced by physician preference and patient characteristics. Our study highlights the need for evidence-based data for determining the best initial regimen for different HIV-infected persons.
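The reported relative risk ratios against a reference regimen are consistent with a multinomial logistic model; the sketch below assumes that form, and the file and column names are hypothetical rather than the Swiss HIV Cohort Study variables.

    # Sketch of a multinomial logistic model for choice of initial regimen,
    # with the first coded category acting as the reference regimen.
    import pandas as pd
    import statsmodels.formula.api as smf

    patients = pd.read_csv("naive_patients.csv")               # hypothetical file
    patients["regimen"] = pd.Categorical(
        patients["regimen"],
        categories=["TDF-FTC-EFV", "TDF-FTC-LPVr", "TDF-FTC-ATVr", "ZDV-3TC-LPVr"],
    )
    patients["regimen_code"] = patients["regimen"].cat.codes   # 0 = reference (TDF-FTC-EFV)

    fit = smf.mnlogit(
        "regimen_code ~ prior_aids + high_viral_load + cd4_gt_350 + female + opiate_substitution",
        data=patients,
    ).fit()
    # Exponentiated coefficients are relative risk ratios versus the reference regimen.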
Abstract:
Since 1990, the issue of homelessness has become increasingly important in Hungary as a result of economic and structural changes. Various suggestions as to how the problem may be solved have always been preceded by the question "How many homeless people are there?", and there is still no official consensus as to the answer. Counting the homeless is particularly difficult because of the bias in the initial sampling frame due to two factors that characterise this population: the definition of homelessness and its 'hidden' nature. David aimed to estimate the size of the homeless population of Budapest by using two non-standard sampling methods: snowball sampling and the capture-recapture method. Her calculations are based on three data sets: one snowball data set and two independent list data sets. These estimates, supported by other statistical data, suggest that in 1999 there were about 8,000-10,000 homeless people in Budapest.
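For background on the capture-recapture step, the classical two-list (Lincoln-Petersen) estimator combines two independent lists of sizes n_1 and n_2 with m individuals appearing on both:

    \[ \hat{N} = \frac{n_1\, n_2}{m} . \]

As a purely generic illustration (these numbers are not from the study), two lists of 2,000 and 1,500 people with 400 matches would give \hat{N} = 2000 \times 1500 / 400 = 7{,}500.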
Abstract:
The angiotensin II receptor blockers irbesartan and losartan effectively reduce blood pressure and proteinuria in childhood. We were impressed by the neutral taste and the small size of the candesartan cilexetil tablets. This angiotensin II receptor blocker was used for 4 months in 17 pediatric patients (aged 0.5-16 years, median 4.5 years) with chronic arterial hypertension (n=6), overt proteinuria (n=2), or both (n=9). The initial candesartan dose of 0.23 (0.16-0.28) mg/kg body weight once daily (median and interquartile range) was doubled in ten patients [final dose 0.35 (0.22-0.47) mg/kg body weight]. No adverse clinical events were noted on candesartan. Candesartan increased plasma potassium by 0.3 (0.0-0.8) mmol/l (P<0.01). In children with arterial hypertension, blood pressure decreased by 9 (3-13)/9 (3-18) mmHg (P<0.01); in those with overt proteinuria, the urinary albumin/creatinine ratio decreased by 279 (33-652) mg/mmol (P<0.05). In conclusion, in children candesartan reduces blood pressure and proteinuria with an excellent short-term tolerability profile.
Abstract:
OBJECTIVE: Patients in the stomatology service of the Department of Oral Surgery and Stomatology who were clinically and histopathologically diagnosed with oral lichen planus (OLP) in the years 1995 to 2001 were examined for a possible malignant transformation of a previously biopsied OLP site. METHOD AND MATERIALS: For the 145 patients included, the records were searched for the initial localization and type of OLP lesion, potential noxious agents, the distribution between symptomatic and asymptomatic OLP types, and any malignant transformation of a known OLP site during the follow-up period up to December 2003. RESULTS: The group comprised 47 men and 98 women with a mean age of 56.3 years. Of the 497 lesions, almost half were classified as reticular or papular, predominantly located on the buccal mucosa, gingiva, and borders of the tongue. Four patients did not adhere to their scheduled control visits and were dropped from the study. During the follow-up period, 4 patients developed malignant transformation of OLP. In 3 of these cases, dysplasia was present at the initial diagnosis of OLP. This results in a malignant transformation rate of 2.84% among the remaining 141 patients; if the 3 patients with initial dysplasia are excluded, the rate drops to 0.71%. CONCLUSIONS: Until further knowledge is derived from large prospective studies, the evidence supporting or negating a potential malignant character of OLP lesions remains inconclusive. Special emphasis has to be directed toward unified inclusion and exclusion criteria regarding clinical and histologic findings and identifiable risk factors to allow the comparison of different studies.
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, the discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or of differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of the prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass-per-charge value, x, in that specimen. Given the high coefficients of variation and other characteristics of the protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in the SELDI output. After these preprocessing steps, we combine the binary predictors to generate classification rules for the cancer, benign hyperplasia, and normal states of the prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate the sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
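A minimal sketch of the two stages described above, binarizing local intensity peaks on a common mass-per-charge grid and then boosting over the binary predictors; the window count, file names, and the specific boosting variant (AdaBoost here) are illustrative assumptions rather than the paper's exact choices.

    # Convert SELDI (x, y) traces into binary peak indicators on a shared grid,
    # then fit a boosted classifier over those indicators.
    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.ensemble import AdaBoostClassifier

    def peak_indicators(intensity, n_bins=500):
        """1 if the y-trace has a local peak within a bin of the x-axis, else 0."""
        peaks, _ = find_peaks(intensity)                  # indices of local maxima
        bins = np.array_split(np.arange(intensity.size), n_bins)
        return np.array([int(np.intersect1d(b, peaks).size > 0) for b in bins])

    # spectra: (n_specimens, n_points) intensities; labels: 0=normal, 1=benign, 2=cancer
    spectra = np.load("seldi_intensities.npy")            # hypothetical files
    labels = np.load("labels.npy")

    X = np.vstack([peak_indicators(y) for y in spectra])  # binary peak predictors
    clf = AdaBoostClassifier(n_estimators=200).fit(X, labels)

Binning the x-axis into fixed windows is only a crude stand-in for the paper's explicit handling of the mass-per-charge shift problem.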
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time, then the data are described by the well-known singly censored current status model, also known as interval-censored data, case I. We extend this current status model by allowing the presence of a time-dependent covariate process, which is partly observed, and by allowing the monitoring time C to depend on T through the observed part of this time-dependent process. Because of the high dimension of the covariate process, no globally efficient estimators exist with good practical performance at moderate sample sizes. We follow the approach of Robins and Rotnitzky (1992) by modeling the censoring variable, given the time variable and the covariate process, i.e., the missingness process, under the restriction that it satisfies coarsening at random. We propose a generalization of the simple current status estimator of the distribution of T and of smooth functionals of the distribution of T, which is based on an estimate of the missingness process. In this estimator the covariates enter only through the estimate of the missingness process. Due to the coarsening at random assumption, the estimator has the interesting property that if we estimate the missingness process more nonparametrically, then we improve its efficiency. We show that by local estimation of an optimal model or an optimal function of the covariates for the missingness process, the generalized current status estimator for smooth functionals becomes locally efficient, meaning that it is efficient if the right model or covariate is consistently estimated and it is consistent and asymptotically normal in general. Estimation of the optimal model requires estimation of the conditional distribution of T, given the covariates. Any (prior) knowledge of this conditional distribution can be used at this stage without any risk of losing root-n consistency. We also propose locally efficient one-step estimators. Finally, we show some simulation results.
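For orientation, the simple current status estimator being generalized is, in its nonparametric form, the isotonic regression of the censoring indicator on the monitoring time, which yields the NPMLE of F(t) = P(T <= t). A minimal sketch with simulated data follows; unlike the proposed estimator, it uses no covariates and no model for the missingness process.

    # Simple current status estimator: from observations (C_i, Delta_i) with
    # Delta_i = 1{T_i <= C_i}, estimate F(t) by isotonic regression of Delta on C.
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    T = rng.exponential(scale=2.0, size=500)   # unobserved event times
    C = rng.uniform(0.0, 6.0, size=500)        # monitoring times
    delta = (T <= C).astype(float)             # the only observed event information

    order = np.argsort(C)
    F_hat = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(C[order], delta[order])
    # F_hat[k] estimates F at the k-th ordered monitoring time.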
Abstract:
Estimation for bivariate right-censored data is a problem that has been studied extensively over the past 15 years. In this paper we propose a new class of estimators for the bivariate survival function based on locally efficient estimation. We introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem, report the results of simulation studies, and conclude with a brief data analysis illustrating the use of the locally efficient estimator.
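As a point of reference only (this is not the locally efficient estimator proposed here), the simplest inverse-probability-of-censoring-weighted estimator of the bivariate survival function S(t_1, t_2) = P(T_1 > t_1, T_2 > t_2), assuming a common censoring time C independent of (T_1, T_2) with survival function G, is

    \[ \hat{S}(t_1, t_2) = \frac{1}{n} \sum_{i=1}^{n} \frac{\mathbb{1}\{\tilde{T}_{1i} > t_1,\ \tilde{T}_{2i} > t_2\}}{\hat{G}(t_1 \vee t_2)} , \]

where \tilde{T}_{ji} = \min(T_{ji}, C_i) and \hat{G} is the Kaplan-Meier estimator of the censoring survival function. Locally efficient estimation improves on such simple weighted estimators by incorporating an estimated model for the full data while remaining consistent when that model is misspecified.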