14 results for Tree solution method
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
We study a real-world scheduling problem arising in the context of rolling-ingot production. First we review the production process and discuss peculiarities that must be observed when scheduling a given set of production orders on the production facilities. We then show how to model this scheduling problem using prescribed time lags between operations, different kinds of resources, and sequence-dependent changeovers. A branch-and-bound solution procedure is presented in the second part. The basic principle is to relax the resource constraints by assuming infinite resource availability. The resulting resource conflicts are then resolved stepwise by introducing precedence relationships among operations competing for the same resources. The algorithm has been implemented as a beam search heuristic that enumerates alternative sets of precedence relationships.
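The conflict-resolution idea above can be sketched in a few lines of Python. The toy instance (three operations, one unit-capacity furnace) and the simple earliest-start relaxation are hypothetical stand-ins, not the authors' implementation:

```python
import heapq

# Hypothetical toy instance: durations, required resource, prescribed precedences.
DUR = {"A": 3, "B": 2, "C": 4}
RES = {"A": "furnace", "B": "furnace", "C": "mill"}   # unit-capacity resources
PREC = {("A", "C")}                                   # A must finish before C starts

def schedule(prec):
    """Earliest-start schedule under precedences only (the resource relaxation)."""
    start = {}
    for _ in DUR:                       # enough fixed-point passes for a small DAG
        for op in DUR:
            start[op] = max([0] + [start.get(p, 0) + DUR[p]
                                   for p, q in prec if q == op])
    return start

def first_conflict(start):
    """Find two operations overlapping on the same unit-capacity resource."""
    ops = sorted(DUR)
    for i, a in enumerate(ops):
        for b in ops[i + 1:]:
            if (RES[a] == RES[b]
                    and start[a] < start[b] + DUR[b]
                    and start[b] < start[a] + DUR[a]):
                return a, b
    return None

def makespan(prec):
    s = schedule(prec)
    return max(s[o] + DUR[o] for o in DUR)

def beam_search(beam_width=2):
    """Resolve conflicts stepwise: order a conflicting pair both ways, then
    keep only the beam_width candidates with the smallest relaxed makespan."""
    beam = [frozenset(PREC)]
    while True:
        children = []
        for prec in beam:
            s = schedule(prec)
            conflict = first_conflict(s)
            if conflict is None:        # relaxation is resource-feasible: done
                return s, makespan(prec)
            a, b = conflict
            children.append(prec | {(a, b)})
            children.append(prec | {(b, a)})
        beam = heapq.nsmallest(beam_width, children, key=makespan)
```

Each branching step orders one conflicting pair in both directions; the beam keeps only the most promising partial sets of precedence relationships.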
Abstract:
To improve our understanding of the Asian monsoon system, we developed a hydroclimate reconstruction in a marginal monsoon shoulder region for the period prior to the industrial era. Here, we present the first moisture-sensitive tree-ring chronology, spanning 501 years, for the Dieshan Mountain area, a boundary region of the Asian summer monsoon in the northeastern Tibetan Plateau. This reconstruction was derived from 101 cores of 68 old-growth Chinese pine (Pinus tabulaeformis) trees. We introduce a Hilbert–Huang Transform (HHT) based standardization method to develop the tree-ring chronology, which has the advantage of excluding non-climatic disturbances from individual tree-ring series. Based on the reliable portion of the chronology, we reconstructed the annual (prior July to current June) precipitation history since 1637 for the Dieshan Mountain area and were able to explain 41.3% of the variance. The extremely dry years in this reconstruction were also found in historical documents and are associated with El Niño episodes. Dry periods were reconstructed for 1718–1725, 1766–1770 and 1920–1933, whereas 1782–1788 and 1979–1985 were wet periods. The spatial signatures of these events are supported by data from other marginal regions of the Asian summer monsoon. Over the past four centuries, out-of-phase relationships between hydroclimate variations in the Dieshan Mountain area and far western Mongolia were observed during the 1718–1725 and 1766–1770 dry periods and the 1979–1985 wet period.
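The paper's HHT-based standardization is beyond a short sketch, but the underlying idea of tree-ring standardization, dividing each raw ring-width series by a fitted growth trend and averaging the resulting indices into a mean chronology, can be illustrated with a minimal hypothetical Python example (a moving average stands in for the fitted growth curve):

```python
def standardize(widths, window=5):
    """Divide raw ring widths by a smoothed trend (a moving average stands in
    for the fitted age-related growth curve) to obtain dimensionless indices."""
    n = len(widths)
    indices = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        trend = sum(widths[lo:hi]) / (hi - lo)
        indices.append(widths[i] / trend)
    return indices

def chronology(index_series):
    """Average the standardized indices of all trees year by year."""
    n = min(len(s) for s in index_series)
    return [sum(s[i] for s in index_series) / len(index_series)
            for i in range(n)]
```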
Abstract:
We present a novel approach to the reconstruction of depth from light field data. Our method uses dictionary representations and group sparsity constraints to derive a convex formulation. Although our solution results in an increase of the problem dimensionality, we keep numerical complexity at bay by restricting the space of solutions and by exploiting an efficient Primal-Dual formulation. Comparisons with state of the art techniques, on both synthetic and real data, show promising performances.
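Group sparsity constraints of this kind are typically enforced through the proximal operator of the group l2 norm inside a primal-dual iteration. A minimal Python sketch of that operator (block soft-thresholding; this is a generic building block, not the authors' dictionary-based formulation):

```python
import math

def prox_group_l2(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (block soft-thresholding):
    each group's coefficients are shrunk toward zero, and groups whose norm
    falls below lam are zeroed entirely, which is what makes whole groups
    drop out of the solution."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out
```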
Abstract:
This paper reports an LC-MS/MS method with positive electrospray ionization for the screening of commonly prescribed cardiovascular drugs in human plasma, including compounds with antihypertensive (57), antidiabetic (12), hypolipidemic (5), anticoagulant (2) and platelet anti-aggregation (2) effects. Sample treatment consisted of a simple protein precipitation with MeOH/0.1 M ZnSO₄ (4:1, v/v) solution after the addition of internal standard, followed by evaporation and reconstitution. Analyte separation was performed on a Polar-RP column (150 mm x 2 mm, 4 μm) using a 15 min gradient elution. The MS system was operated in MRM mode, monitoring one quantitation and one confirmation transition for each analyte. The recovery of the protein precipitation step ranged from 50 to 70% for most of the compounds, while some were considerably affected by matrix effects. Since several analytes fulfilled the linearity, accuracy and precision criteria required by the ICH guidelines, the method proved to be suitable for their quantitative analysis. The limits of quantitation varied from 0.38 to 9.1 μg/L and the limits of detection from 0.12 to 5.34 μg/L. The method proved suitable for the analysis of plasma samples from patients under cardiovascular treatment with the studied drugs, and reliable quantitative results could be obtained for 55 compounds.
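For context, limits of detection and quantitation of this kind are commonly estimated from a calibration line following the ICH approach (LOD = 3.3 sigma/slope, LOQ = 10 sigma/slope, with sigma the residual standard deviation of the calibration). A minimal Python sketch with hypothetical calibration data:

```python
def fit_line(x, y):
    """Ordinary least-squares calibration line plus the residual SD."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return slope, intercept, sigma

def lod_loq(slope, sigma):
    """ICH-style estimates: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```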
Abstract:
A liquid chromatography tandem mass spectrometry (LC-MS/MS) confirmatory method for the simultaneous determination of nine corticosteroids in liver, including the four MRL compounds listed in Council Regulation 37/2010, was developed. After enzymatic deconjugation and solvent extraction of the liver tissue, the resulting solution was cleaned up on an SPE Oasis HLB cartridge. The analytes were then detected by liquid chromatography-negative-ion electrospray tandem mass spectrometry, using deuterium-labelled internal standards. The procedure was validated as a quantitative confirmatory method according to the Commission Decision 2002/657/EC criteria. The results showed that the method was suitable for statutory residue testing with regard to the following performance characteristics: instrumental linearity, specificity, precision (repeatability and intra-laboratory reproducibility), recovery, decision limit (CCα), detection capability (CCβ) and ruggedness. All the corticosteroids could be detected at concentrations around 1 μg kg⁻¹; recoveries were above 62% for all the analytes. Repeatability and within-laboratory reproducibility for all the analytes were below 7.65% and 15.5%, respectively.
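For substances with an established MRL, Decision 2002/657/EC defines the decision limit CCα as the permitted limit plus 1.64 times the within-laboratory reproducibility standard deviation, and the detection capability CCβ as CCα plus another 1.64 SD. A minimal sketch with hypothetical numbers (not the paper's data):

```python
def cc_alpha_beta(mrl, sd_reprod):
    """CCalpha = MRL + 1.64*SD (decision limit) and
    CCbeta = CCalpha + 1.64*SD (detection capability), with SD the
    within-laboratory reproducibility at the permitted limit."""
    cc_alpha = mrl + 1.64 * sd_reprod
    return cc_alpha, cc_alpha + 1.64 * sd_reprod
```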
Abstract:
BACKGROUND: In order to optimise the cost-effectiveness of active surveillance to substantiate freedom from disease, a new approach using targeted sampling of farms was developed and applied on the example of infectious bovine rhinotracheitis (IBR) and enzootic bovine leucosis (EBL) in Switzerland. Relevant risk factors (RF) for the introduction of IBR and EBL into Swiss cattle farms were identified and their relative risks defined based on literature review and expert opinions. A quantitative model based on the scenario tree method was subsequently used to calculate the required sample size of a targeted sampling approach (TS) for a given sensitivity. We compared the sample size with that of a stratified random sample (sRS) with regard to efficiency. RESULTS: The required sample sizes to substantiate disease freedom were 1,241 farms for IBR and 1,750 farms for EBL to detect 0.2% herd prevalence with 99% sensitivity. Using conventional sRS, the required sample sizes were 2,259 farms for IBR and 2,243 for EBL. Considering the additional administrative expenses required for the planning of TS, the risk-based approach was still more cost-effective than a sRS (40% reduction on the full survey costs for IBR and 8% for EBL) due to the considerable reduction in sample size. CONCLUSIONS: As the model depends on RF selected through literature review and was parameterised with values estimated by experts, it is subject to some degree of uncertainty. Nevertheless, this approach provides the veterinary authorities with a promising tool for future cost-effective sampling designs.
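The sample sizes above follow from the standard freedom-from-disease calculation: the probability of detecting at least one infected herd among n sampled herds is 1 - (1 - p*Se)^n, so n = ln(1 - target)/ln(1 - p*Se). A minimal Python sketch; with a perfect herd-level sensitivity the formula gives a figure of the same order as the stratified random sample sizes quoted above:

```python
import math

def sample_size(design_prev, herd_se, target_se=0.99):
    """Number of herds to sample so that an infection present at the design
    prevalence is detected with probability target_se:
    1 - (1 - p*Se)^n >= target_se  =>  n = ln(1 - target)/ln(1 - p*Se)."""
    return math.ceil(math.log(1.0 - target_se)
                     / math.log(1.0 - design_prev * herd_se))
```

The risk-based (targeted) approach reduces n by weighting high-risk farms, which is what the scenario tree model quantifies.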
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008 among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and access to relevant methodological progress in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and no judgment at all (the unweighted phylogenetic approach). Some basis for judgment of the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysis of one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e.
texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
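One elementary check behind stemma-versus-variation analyses of this kind is whether the witnesses sharing a reading occupy a connected region of the stemma. A minimal, hypothetical Python sketch (treating the stemma as an undirected tree and ignoring lost intermediate copies, a simplification of the genealogical consistency test):

```python
from collections import defaultdict, deque

def connected_in_stemma(edges, witnesses):
    """True if the witnesses sharing a reading induce a connected subgraph of
    the stemma (edges given as undirected parent-child pairs). Lost
    intermediate copies are ignored in this simplification."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    keep = set(witnesses)
    todo, seen = deque([next(iter(keep))]), set()
    while todo:                       # BFS restricted to the witness set
        v = todo.popleft()
        if v in seen:
            continue
        seen.add(v)
        todo.extend((adj[v] & keep) - seen)
    return seen == keep
```

A reading whose witnesses are not connected must have arisen more than once (or been transmitted by contamination), which is exactly the kind of variation a significance profile needs to flag.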
Abstract:
Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object, that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameters space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the identification of the GBR parameters can be achieved very efficiently; finally, the algorithm is consistent as it always returns the same solution given the same data. Our algorithm is also remarkably robust: It can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80 % of the observations). The method is validated on real data and achieves state-of-the-art results.
Abstract:
Rockfall is a widespread and hazardous process in mountain environments, but data on past events are only rarely available. Growth-ring series from trees impacted by rockfall have been used successfully in the past to overcome the lack of archival records. Dendrogeomorphic techniques have been demonstrated to allow very accurate dating and reconstruction of spatial and temporal rockfall activity, but the approach has been cited as labor intensive and time consuming. In this study, we present a simplified method to quantify rockfall processes on forested slopes that requires less time and effort. The approach is based on counting visible scars on the stem surface of Common beech (Fagus sylvatica L.). Data are presented from a site in the Inn valley (Austria), where rocks are frequently detached from an ~ 200-m-high, south-facing limestone cliff. We compare results obtained from (i) the “classical” analysis of growth disturbances in the tree-ring series of 33 Norway spruces (Picea abies (L.) Karst.) and (ii) a scar count on the stem surface of 50 F. sylvatica trees. A total of 277 rockfall events since A.D. 1819 could be reconstructed from the tree-ring records of P. abies, whereas 1140 scars were observed on the stem surface of F. sylvatica. Absolute numbers of rockfalls (and hence return intervals) vary significantly between the approaches, and the mean number of rockfalls observed on the stem surface of F. sylvatica exceeds that of P. abies by a factor of 2.7. On the other hand, both methods yield comparable data on the spatial distribution of relative rockfall activity. The differences may be explained by a large proportion of masked scars in P. abies and the better conservation of impact signs on the stem of F. sylvatica. In addition, the data indicate that several scars on the bark of F. sylvatica may stem from the same impact and thus lead to an overestimation of rockfall activity.
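The factor of 2.7 reported above is simple arithmetic on the per-tree event counts:

```python
# Per-tree event counts from the study: 1140 scars on 50 beech stems versus
# 277 tree-ring reconstructed events on 33 spruces.
beech_per_tree = 1140 / 50                  # 22.8 scars per F. sylvatica
spruce_per_tree = 277 / 33                  # about 8.4 events per P. abies
ratio = beech_per_tree / spruce_per_tree    # about 2.7, the factor in the text
```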
Abstract:
We investigate a class of optimal control problems that exhibit constant, exogenously given delays in the control in the equation of motion of the differential states. We formulate an exemplary optimal control problem with one stock and one control variable and review some analytic properties of an optimal solution. However, analytical considerations are quite limited in the case of delayed optimal control problems. In order to overcome these limits, we reformulate the problem and apply direct numerical methods to calculate approximate solutions that give a better understanding of this class of optimization problems. In particular, we present two possibilities to reformulate the delayed optimal control problem into an instantaneous optimal control problem and show how these can be solved numerically with a state-of-the-art direct method by applying Bock’s direct multiple shooting algorithm. We further demonstrate the strength of our approach with two economic examples.
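The core of such reformulations is that, once the problem is discretized, a constant control delay becomes a plain index shift, so the delayed problem is instantaneous in the decision vector. A minimal Python sketch of this idea; the dynamics, coefficients and step sizes are hypothetical illustrations, not the paper's examples:

```python
def simulate(u, x0=1.0, h=0.1, tau=0.3, a=-0.5, b=1.0):
    """Euler discretization of x'(t) = a*x(t) + b*u(t - tau). The constant
    delay tau becomes the index shift d = tau/h, so the trajectory is an
    ordinary (instantaneous) function of the decision vector u[0..N-1].
    Controls before t = 0 are taken to be zero."""
    d = round(tau / h)
    x, traj = x0, [x0]
    for k in range(len(u)):
        u_delayed = u[k - d] if k >= d else 0.0
        x = x + h * (a * x + b * u_delayed)
        traj.append(x)
    return traj
```

Any standard NLP or direct shooting solver can then optimize over u directly, since the delay has been absorbed into the discretization.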
Abstract:
An increasing number of clubs experience difficulties in recruiting and retaining sufficient numbers of volunteers to manage and staff their clubs (Lamprecht, Fischer, & Stamm, 2012). In order to facilitate volunteer recruitment, sport clubs need a specific strategy to recruit and retain volunteers for both formal positions and ad hoc tasks. Therefore, the intervention “More Volunteers in Football Clubs” was designed and its impact evaluated in detail. The question this evaluation research addresses is: Can football clubs recruit and retain volunteers successfully by implementing the intervention “More Volunteers in Football Clubs”? The intervention is based on the different expectations and needs of volunteers, as well as on non-profit human resource management and organisational development management, with a strong emphasis on club-specific counseling and support. Task forces of the twelve participating football clubs attended four workshops in which they received tailor-made counseling to reach the desired number of volunteers. The intervention was implemented and its effectiveness tested in cooperation with the Swiss Football Federation in twelve Swiss football clubs following a pretest, intervention, posttest design. Data were gathered and analysed using a combination of qualitative and quantitative methods. The outcome measures were: volunteer rate, number of recruited volunteers, number of filled volunteer positions and volunteer satisfaction. Four months after the intervention, all clubs that completed the proposed intervention had succeeded in recruiting the desired number of volunteers. Further, all participating clubs found the intervention helpful and would recommend participation to other clubs. With the development of this practical intervention, a solution is provided for football clubs to overcome the difficulties in recruiting and retaining sufficient numbers of volunteers.
Lamprecht, M., Fischer, A., & Stamm, H.-P. (2012). Sportvereine in der Schweiz. Strukturen, Leistungen, Herausforderungen. Zürich, Switzerland: Seismo.
Abstract:
BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, encrypted hash codes of two names differ completely if the plain names differ by only a single character. Therefore, standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In this Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables.
RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of the persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
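The Bloom-filter step described above can be sketched as follows: each name is reduced to character bigrams, the bigrams are hashed into a bit mask, and masks are compared with a similarity coefficient (here Dice, which tolerates single-character typos that would break exact hashes). The filter size, number of hash functions and salted-SHA-1 construction are illustrative assumptions, not the P3RL specification:

```python
import hashlib

def bloom(name, m=256, k=4):
    """Hash a name's character bigrams into an m-bit Bloom filter. Sites share
    only these bit masks, so the linkage center never sees plain names."""
    bits = 0
    padded = f"_{name.lower()}_"
    for i in range(len(padded) - 1):
        bigram = padded[i:i + 2]
        for j in range(k):   # k hash functions from salted SHA-1 digests
            h = int(hashlib.sha1(f"{j}:{bigram}".encode()).hexdigest(), 16)
            bits |= 1 << (h % m)
    return bits

def dice(a, b):
    """Dice similarity on the set bits of two Bloom filters."""
    inter = bin(a & b).count("1")
    return 2.0 * inter / (bin(a).count("1") + bin(b).count("1"))
```

Because similar names share most bigrams, their filters share most set bits, so the linkage center can score candidate pairs probabilistically without ever decrypting them.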
Abstract:
Background: The Swiss pig population enjoys a favourable health situation. To further promote this, the Pig Health Service (PHS) conducts a surveillance program in affiliated herds: closed multiplier herds with the highest PHS health and hygiene status have to be free from swine dysentery and progressive atrophic rhinitis and are clinically examined four times a year, including laboratory testing. In addition, four batches of pigs per year are fattened together with pigs from other herds and checked for typical symptoms (monitored fattening groups (MF)). Although the program is costly and laborious, little was known about its effectiveness in detecting an infection in a herd. Therefore, the sensitivity of the surveillance for progressive atrophic rhinitis and swine dysentery at herd level was assessed using scenario tree modelling, a method well established at national level. Furthermore, its costs and the time until an infection would be detected were estimated, with the final aim of yielding suggestions on how to optimize surveillance. Results: For swine dysentery, the median annual surveillance sensitivity was 96.7 %, the mean time to detection 4.4 months, and the total annual costs 1022.20 Euro/herd. The median component sensitivity of active sampling was between 62.5 and 77.0 %, that of a MF between 7.2 and 12.7 %. For progressive atrophic rhinitis, the median surveillance sensitivity was 99.4 %, the mean time to detection 3.1 months and the total annual costs 842.20 Euro. The median component sensitivity of active sampling was 81.7 %, that of a MF between 19.4 and 38.6 %. Conclusions: The results indicate that the total sensitivity for both diseases is high, while the time to detection could be a risk in herds with frequent pig trade. Of all components, active sampling contributed most to the surveillance sensitivity, whereas the contribution of MF was very low. To increase efficiency, active sampling should be intensified (more animals sampled) and MF abandoned.
This would significantly improve sensitivity and time to detection at comparable or lower costs. The method of scenario tree modelling proved useful to assess the efficiency of surveillance at herd level. Its versatility allows adjustment to all kinds of surveillance scenarios to optimize sensitivity, time to detection and/or costs.
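The way component sensitivities combine into an overall surveillance sensitivity (assuming the components act independently) can be sketched as:

```python
def combined_sensitivity(component_ses):
    """Probability that at least one surveillance component detects the
    infection, assuming independence: Se = 1 - prod(1 - Se_i)."""
    p_miss = 1.0
    for se in component_ses:
        p_miss *= 1.0 - se
    return 1.0 - p_miss

# Illustrative values within the ranges reported above: one round of active
# sampling plus four monitored fattening groups per year.
se_total = combined_sensitivity([0.817, 0.29, 0.29, 0.29, 0.29])
```

This also shows why dropping the low-sensitivity MF components while intensifying active sampling can keep the total sensitivity high at lower cost.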