852 results for "Initial data problem"
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that occurs in a major Brazilian retail group. A single depot serves 519 stores of the group distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics for constructing initial solutions and a scatter search (SS) approach. The produced solutions are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of our proposed procedure. (C) 2008 Published by Elsevier B.V.
Abstract:
Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
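The baseline estimator this abstract improves on is an ordinary least-squares fit of log variance against log mean. A minimal sketch of that naive regression (synthetic data and the function name are illustrative; the bias correction and site-specific covariates discussed in the abstract are deliberately omitted):

```python
import numpy as np

def taylor_power_law_fit(means, variances):
    """OLS fit of log variance on log mean: V = a * m**b on the log scale."""
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)  # slope = exponent b
    return np.exp(log_a), b

# Synthetic species-abundance summaries obeying an exact power law, a = 0.5, b = 2.0
means = np.array([1.0, 2.0, 5.0, 10.0, 50.0])
variances = 0.5 * means ** 2.0
a_hat, b_hat = taylor_power_law_fit(means, variances)
```

On exact power-law data the regression recovers a and b; the bias the paper addresses appears when sparse counts make the empirical log variances noisy.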
Abstract:
Hepatocellular carcinoma (HCC) ranks in prevalence and mortality among the top 10 cancers worldwide. Butyric acid (BA), a histone deacetylase inhibitor (HDACi), has been proposed as an anticarcinogenic agent. However, its short half-life is a therapeutic limitation. This problem could be circumvented with tributyrin (TB), a proposed BA prodrug. To investigate TB effectiveness for chemoprevention, rats were treated with the compound during the initial phases of the "resistant hepatocyte" model of hepatocarcinogenesis, and cellular and molecular parameters were evaluated. TB inhibited (p < 0.05) the development of hepatic preneoplastic lesions (PNL), including persistent ones considered HCC progression sites. TB increased (p < 0.05) PNL remodeling, a process whereby they tend to disappear. TB did not inhibit cell proliferation in PNL, but induced (p < 0.05) apoptosis in remodeling ones. Compared to controls, rats treated with TB presented increased (p < 0.05) hepatic levels of BA, indicating its effectiveness as a prodrug. Molecular mechanisms of TB-induced hepatocarcinogenesis chemoprevention were investigated. TB increased (p < 0.05) hepatic nuclear histone H3K9 hyperacetylation, specifically in PNL, and p21 protein expression, which could be associated with inhibitory HDAC effects. Moreover, it reduced (p < 0.05) the frequency of persistent PNL with aberrant cytoplasmic p53 accumulation, an alteration associated with increased malignancy. The original data observed in our study support the effectiveness of TB as a prodrug of BA and as an HDACi in hepatocarcinogenesis chemoprevention. Besides histone acetylation and restored p21 expression, molecular mechanisms involved in TB anticarcinogenic actions could also be related to modulation of p53 pathways. (C) 2008 Wiley-Liss, Inc.
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], by including an additional rate constant which takes into account the different reactivity of the initial pore surface having attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than earlier random pore models. The results clearly indicate the improvement of model predictions in the low conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification as compared to the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
Abstract:
In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule, or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and, concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
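The external cross-validation recommended above can be sketched as follows: gene selection is redone inside every fold, so the held-out samples never influence which genes are chosen. The data, the mean-difference gene filter, and the nearest-centroid classifier below are illustrative stand-ins, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 40, 200, 5                        # samples, genes, genes kept per fold
X = rng.normal(size=(n, p))                 # synthetic expression matrix
y = np.arange(n) % 2                        # two tumour classes
X[y == 1, :3] += 1.5                        # only 3 genes carry real signal

def select_genes(Xtr, ytr, k):
    """Keep the k genes with the largest absolute class-mean difference."""
    diff = Xtr[ytr == 0].mean(axis=0) - Xtr[ytr == 1].mean(axis=0)
    return np.argsort(-np.abs(diff))[:k]

errors = 0
for test_idx in np.array_split(rng.permutation(n), 10):   # 10-fold CV
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    genes = select_genes(X[train_idx], y[train_idx], k)   # selection INSIDE the fold
    rows0 = train_idx[y[train_idx] == 0]
    rows1 = train_idx[y[train_idx] == 1]
    c0 = X[np.ix_(rows0, genes)].mean(axis=0)             # class centroids
    c1 = X[np.ix_(rows1, genes)].mean(axis=0)
    for i in test_idx:
        pred = int(np.linalg.norm(X[i, genes] - c1) < np.linalg.norm(X[i, genes] - c0))
        errors += int(pred != y[i])
cv_error = errors / n                                     # externally cross-validated error
```

Selecting the genes once on all n samples and then cross-validating only the classifier would reproduce the selection bias the paper warns about.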
Abstract:
Background: This study used household survey data on the prevalence of child, parent and family variables to establish potential targets for a population-level intervention to strengthen parenting skills in the community. The goals of the intervention include decreasing child conduct problems, increasing parental self-efficacy and use of positive parenting strategies, decreasing coercive parenting, and increasing help-seeking, social support and participation in positive parenting programmes. Methods: A total of 4010 parents with a child under the age of 12 years completed a statewide telephone survey on parenting. Results: One in three parents reported that their child had a behavioural or emotional problem in the previous 6 months. Furthermore, 9% of children aged 2–12 years met criteria for oppositional defiant disorder. Parents who reported their child's behaviour to be difficult were more likely to perceive parenting as a negative experience (i.e. demanding, stressful and depressing). Parents with the greatest difficulties were mothers without partners and those with low levels of confidence in their parenting roles. About 20% of parents reported being stressed and 5% reported being depressed in the 2 weeks prior to the survey. Parents with personal adjustment problems had lower levels of parenting confidence and their child was more difficult to manage. Only one in four parents had participated in a parent education programme. Conclusions: Implications for the setting of population-level goals and targets for strengthening parenting skills are discussed.
Abstract:
Reviews the ecological status of the mahogany glider and describes its distribution, habitat and abundance, life history and threats to it. Three serial surveys of Brisbane residents provide data on the knowledge of respondents about the mahogany glider. The results provide information about the attitudes of respondents to the mahogany glider, to its conservation and to relevant public policies, and about variations in these factors as participants' knowledge of the mahogany glider alters. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider. Population viability analysis is applied to estimate the required habitat area for a minimum viable population of the mahogany glider to ensure at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and consequently their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits seem to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. This would also be beneficial in conserving other endangered wildlife species. Therefore, additional economic benefits to those estimated on account of the mahogany glider itself can be obtained.
Abstract:
The aim of the study was to test a revised conceptualization of the role of coping in adjustment to a low-control stressor: women's adjustment to a failed in vitro fertilization (IVF) attempt. Data were collected prior to the IVF treatment (Time 1) and twice after the failed IVF attempt (1 to 2 weeks after finding out the results, n = 171, and 6 weeks later, n = 139). Initial adjustment was assessed at Time 1, whereas measures of coping and both self-report and partner ratings of adjustment were obtained at Times 2 and 3. As predicted, escapist strategies and problem-management strategies (mainly at Time 2) were associated with poor adjustment, whereas problem-appraisal coping was associated with better adjustment. There was also support for the proposed positive relationship between adjustment and emotional approach coping (on self-report adjustment).
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcome this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
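One common way to realize the partitioning described above is a uniform grid with multi-assignment: each object is assigned to every cell its bounding box overlaps, and duplicate candidate pairs produced by that duplication are removed after the filter step. A minimal sketch (grid size, data, and function names are illustrative; the refine step on exact geometry is omitted, and this does not reproduce the paper's parallel algorithms):

```python
from collections import defaultdict

def partition(boxes, cell=10.0):
    """Assign each box (xmin, ymin, xmax, ymax) to every grid cell it overlaps."""
    grid = defaultdict(list)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        for cx in range(int(x0 // cell), int(x1 // cell) + 1):
            for cy in range(int(y0 // cell), int(y1 // cell) + 1):
                grid[cx, cy].append(i)          # multi-assignment duplicates objects
    return grid

def bbox_overlap(a, b):
    """Filter step: cheap axis-aligned bounding-box intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(r, s, cell=10.0):
    gr, gs = partition(r, cell), partition(s, cell)
    pairs = set()                               # set removes duplicated pairs
    for key in gr.keys() & gs.keys():           # each cell is an independent task
        for i in gr[key]:
            for j in gs[key]:
                if bbox_overlap(r[i], s[j]):
                    pairs.add((i, j))           # exact-geometry refine would run here
    return pairs

r = [(0, 0, 5, 5), (12, 12, 18, 18)]
s = [(3, 3, 8, 8), (40, 40, 45, 45)]
result = spatial_join(r, s)
```

Each grid cell forms an independent join task, which is what makes the decomposition parallelizable; keeping spatially close objects in the same cell is the locality preservation the abstract identifies as the key to limiting duplication overhead.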
Abstract:
Background. Increased life expectancy in men during the last thirty years is largely due to the decrease in mortality from cardiovascular disease in the age group 29-69 yr. This change has resulted in a change in the disease profile of the population, with conditions such as aneurysm of the abdominal aorta (AAA) becoming more prevalent. The advent of endoluminal treatment for AAA has encouraged prophylactic intervention and fuelled the argument to screen for the disease. The feasibility of inserting an endoluminal graft is dependent on the morphology and growth characteristics of the aneurysm. This study used data from a randomized controlled trial of ultrasound screening for AAA in men aged 65-83 yr in Western Australia for the purpose of determining the norms of the living anatomy in the pressurized infra-renal aorta. Aims. To examine (1) the diameters of the infra-renal aorta in aneurysmal and non-aneurysmal cases, (2) the implications for treatment modalities, with particular reference to endoluminal grafting, which is most dependent on normal and aneurysmal morphology, and (3) any evidence to support the notion that northern Europeans are predisposed to aneurysmal disease. Methods. Using ultrasound, a randomized controlled trial was established in Western Australia to assess the value of a screening program in males aged 65-83 yr. The infra-renal aorta was defined as aneurysmal if the maximum diameter was 30 mm or more. Aortic diameter was modelled both as a continuous variable (in mm) and as a binary outcome variable, for those men who had an infra-renal diameter of 30 mm or more. ANOVA and linear regression were used for modelling aortic diameter as a continuum, while chi-square analysis and logistic regression were used in comparing men with and without the diagnosis of AAA. Findings. By December 1998, 19,583 men had been invited to undergo ultrasound screening for AAA; 12,203 accepted the invitation (corrected response fraction 70.8%).
The prevalence of AAA increased with age, from 4.8% at 65 yr to 10.8% at 80 yr (chi(2) = 77.9, df = 3, P<0.001). The median (IQR) diameter for the non-aneurysmal group was 21.4 mm (3.3 mm) and there was an increase (chi(2) = 76.0, df = 1, P<0.001) in the diameter of the infra-renal aorta with age. Since 27 mm is the 95th centile for the non-aneurysmal infra-renal aorta, a diameter of 30 mm or more is justified as defining an aneurysm. The risk of AAA was higher in men of Australian (OR = 1.0) and northern European origin (OR = 1.0, 95% CL: 0.9, 1.2) compared with those of Mediterranean origin (OR = 0.5, 99% CL: 0.4, 0.7). Conclusion. Although screening has not yet been shown to reduce mortality from AAA, these population-based data assist the understanding of aneurysmal disease and the further development and use of endoluminal grafts for this condition. (C) 2001 Published by Elsevier Science Ltd on behalf of The International Society for Cardiovascular Surgery.
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality.
To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of the estimates of the age-specific incidence, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
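In its simplest form, the link between age-specific prevalence and incidence is that incidence per susceptible is roughly the rate at which prevalence rises with age: lambda(a) ~ P'(a) / (1 - P(a)). The sketch below uses only this crude approximation; the method in the abstract additionally corrects for AIDS-related mortality and the epidemic growth rate, and the numbers here are illustrative, not the Hlabisa data:

```python
import numpy as np

ages = np.array([15.0, 20, 25, 30, 35, 40])              # illustrative age grid (years)
prev = np.array([0.04, 0.20, 0.32, 0.35, 0.33, 0.30])    # illustrative prevalence P(a)

# Crude incidence per susceptible per year, ignoring mortality and epidemic
# growth (both of which the full method accounts for).
dP = np.gradient(prev, ages)                  # finite-difference estimate of P'(a)
incidence = np.clip(dP, 0.0, None) / (1.0 - prev)
```

Even this crude version reproduces the qualitative pattern in the abstract: incidence peaks at young adult ages where prevalence climbs steeply, then falls where the prevalence curve flattens.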
Abstract:
Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
Abstract:
Image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear and ill-posed inverse problem which demands a powerful direct or iterative method. A typical approach for solving the problem is to minimize an error functional using an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum in an appropriate time interval. The aim of this paper is to present a new, simple and low-cost technique (quadrant-searching) to reduce the search space and consequently to obtain an initial solution of the inverse problem of EIT. This technique calculates the error functional for four different contrast distributions, placing a large prospective inclusion in each of the four quadrants of the domain. Comparing the four values of the error functional, it is possible to draw conclusions about the internal electric contrast. For this purpose, we initially performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to verify the behavior of the error functional surface in the search space. Finally, numerical tests were performed to verify the new technique.
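The quadrant-searching idea can be illustrated with a toy stand-in for the forward problem: place one large candidate inclusion in each quadrant, evaluate an error functional for each placement, and keep the quadrant with the smallest error as the initial guess. A real implementation would evaluate the functional through a BEM forward solver and boundary voltage data; here the "measurement" is a synthetic contrast image, and all names and sizes are illustrative:

```python
import numpy as np

n = 16
true_img = np.ones((n, n))                # unit background contrast
true_img[:n // 2, n // 2:] = 2.0          # "measured" inclusion in the top-right quadrant

def candidate(q, n=16):
    """Unit background with a large inclusion filling quadrant q (0..3)."""
    img = np.ones((n, n))
    h = n // 2
    rows = slice(0, h) if q < 2 else slice(h, n)       # q = 0, 1 -> top half
    cols = slice(0, h) if q % 2 == 0 else slice(h, n)  # even q -> left half
    img[rows, cols] = 2.0
    return img

# Evaluate the (toy) error functional for the four placements and keep the
# best one as the initial solution for the iterative reconstruction.
errors = [float(np.sum((candidate(q) - true_img) ** 2)) for q in range(4)]
best_quadrant = int(np.argmin(errors))
```

Only four forward evaluations are needed, which is what makes the technique a cheap way to shrink the search space before the expensive iterative minimization starts.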
Abstract:
In studies assessing the trends in coronary events, such as the World Health Organization (WHO) MONICA Project (multinational MONItoring of trends and determinants of CArdiovascular disease), the main emphasis has been on coronary deaths and non-fatal definite myocardial infarctions (MI). It is, however, possible that the proportion of milder MIs may be increasing because of improvements in treatment and reductions in levels of risk factors. We used the MI register data of the WHO MONICA Project to investigate several definitions for mild non-fatal MIs that would be applicable in various settings and could be used to assess trends in milder coronary events. Of 38 populations participating in the WHO MONICA MI register study, more than half registered a sufficiently wide spectrum of events that it was possible to identify subsets of milder cases. The event rates and case fatality rates of MI are clearly dependent on the spectrum of non-fatal MIs which are included. On clinical grounds we propose that the original MONICA category "non-fatal possible MI" could be divided into two groups: "non-fatal probable MI" and "prolonged chest pain." Non-fatal probable MIs are cases which, in addition to "typical symptoms," have electrocardiogram (ECG) or enzyme changes suggesting cardiac ischemia, but not severe enough to fulfil the criteria for non-fatal definite MI. In more than half of the MONICA Collaborating Centers, the registration of MI covers these milder events reasonably well. Proportions of non-fatal probable MIs vary less between populations than do proportions of non-fatal possible MIs. Also, rates of non-fatal probable MI are somewhat more highly correlated with rates of fatal events and non-fatal definite MI. These findings support the validity of the category of non-fatal probable MI. In each center, the increase in event rates and the decrease in case fatality due to the inclusion of non-fatal probable MI was larger for women than for men.
For the WHO MONICA Project and other epidemiological studies, the proposed category of non-fatal probable MIs can be used for assessing trends in rates of milder MI. Copyright (C) 1997 Elsevier Science Inc.
Abstract:
Background. Several medical devices used during hemodynamic procedures, particularly angiographic diagnostic and therapeutic cardiac catheters, are manufactured for single use only. However, reprocessing and reuse of these devices have been reported. Objective. To determine the frequency of reuse and reprocessing of single-use medical devices used during hemodynamic procedures in Brazil and to evaluate how reprocessing is performed. Design. National survey, conducted from December 1999 to July 2001. Methods. Most of the institutions affiliated with the Brazilian Society of Hemodynamic and Interventional Cardiology were surveyed by use of a questionnaire sent in the mail. Results. The questionnaire response rate was 50% (119 of 240 institutions). Of the 119 institutions that responded, 116 (97%) reported reuse of single-use devices used during hemodynamic procedures, and only 26 (22%) reported use of a standardized reprocessing protocol. Cleaning, flushing, rinsing, drying, sterilizing and packaging methods varied greatly and were mostly inadequate. Criteria for discarding reused devices varied widely. Of the 119 institutions that responded, 80 (67%) reported having a surveillance system for adverse events associated with the reuse of medical devices, although most of these institutions did not routinely review the data, and only 38 (32%) described a training program for the personnel who reprocessed single-use devices. Conclusions. The reuse of single-use devices during hemodynamic procedures was very frequent in hospitals in Brazil. Basic guidance on how to reuse and reprocess single-use medical devices is urgently needed because, despite the lack of studies supporting the reuse and reprocessing of single-use medical devices, these practices remain common in resource-limited areas where such devices are needed.