Abstract:
Objectives: The Serra da Estrela native (SEN) breed of sheep is one of the most important in Portugal, being responsible for the production of one of the most internationally recognized cheeses in the world, "Queijo Serra da Estrela" (protected designation of origin). One of the major risks to the sustainability of SEN dairy flocks is footrot, an infectious disease that causes lameness, decreased milk production, weight loss and reduced fertility. The aim of this work was to determine which parameters are decisive for the appearance of footrot in SEN dairy flocks, while establishing associations with environmental and nutritional variables. Materials and Methods: The present study was carried out using a specially designed 27-item multiple-choice questionnaire, based on the underlying causes of lameness in livestock and on the clinical diagnosis performed by the veterinary technician for the clinical cases evaluated at the moment of the inquiry. The survey was performed during the execution of the official veterinary health program between February and September of 2014 by a veterinary team from the Association of SEN Sheep Breeders (ANCOSE). The ovine producers (N=30, with a total of 1270 animals) were randomly selected from the extensive production area of "Queijo Serra da Estrela". The parameters evaluated in the study were: season and consequent weather changes during the study period, floor types, hygiene conditions, bedding types, the existence and use of footbaths, location of footbaths, foot trimming and foot hygiene procedures. After construction of the database, the frequency of responses for each item was calculated using the Statistical Package for the Social Sciences version 16.0. Results: Most SEN livestock producers are proactive in the treatment of lameness (70%). About 99.7% of lameness cases were related to footrot and most appeared in winter (36.7%). On some occasions there was no specific seasonal distribution (56.7% in the rainiest years). Of the analyzed farms, 70% use straw as floor bedding, followed by bush (21%). 45.6% of the animals clinically diagnosed with footrot were sheltered on straw bedding. Bedding quality was good in 40% of cases; however, changes of the floor bedding within the preceding 15 days were associated with a higher rate of diagnosed footrot cases (33.23%) than monthly changes. Regarding the number of animals per sheepfold, the distribution was proper in 36.7%, elevated in 30%, overcrowded in 6.7% and low in 23.3%. Clinically ill animals in the last category were the fewest observed (3%). Concerning hoof trimming, 76.7% of producers trim while 23.3% reported not performing that task. Of those who trim, 73.9% do it only when necessary, 21.7% at the time of clipping and 4.4% when the animal is affected. One curious result was that flocks trimmed at clipping had more footrot cases (52.6%) than those trimmed only when necessary (40.2%) or only when animals were clinically ill (0.91%). Almost all producers, in the presence of footrot, choose a local treatment (95.2%) using antibiotic sprays instead of parenteral antibiotic treatment. Footbaths are rarely used in the prevention of this disease (13.3%) and, when used, they are misconceived (25%) and incorrectly formulated (100%).
Conclusions: This study was the first performed in Portugal focusing on footrot characterization in native SEN flocks. The economic impact on SEN dairy flocks is attributed to factors such as a direct decrease in milk and meat production, early slaughter of affected animals due to unresponsive treatment, and medical expenses. The most effective eradication method combines, first of all, the ability to understand the problems of SEN producers with the implementation of preventive measures and treatment of footrot. Often linked to a lack of training, the advanced age of Portuguese producers, their mentality and social factors hinder veterinary intervention itself.
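A minimal sketch of the per-item frequency analysis the study describes (the original analysis was done in SPSS 16.0; the file name and column names below are invented for illustration):

    # Frequency of responses for each questionnaire item, as in the study.
    # Illustrative only: the real database and item names are not public.
    import pandas as pd

    df = pd.read_csv("footrot_survey.csv")  # hypothetical survey database
    for item in ["bedding_type", "trimming_practice", "footbath_use"]:
        freq = df[item].value_counts(normalize=True) * 100
        print(f"{item}:\n{freq.round(1)}\n")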
Abstract:
It is a fact that the uncertainty about a firm's future has to be measured and incorporated into a company's valuation beyond the explicit analysis period, that is, in the continuing or terminal value within valuation models. One of the concerns that can influence the continuing value of enterprises, and which is not explicitly considered in traditional valuation models, is a firm's average life expectancy. Although the literature has studied the life cycle of the firm, there is still a considerable lack of references on this topic. If we ignore the period during which a company has the ability to produce future cash flows, valuations can fall into irreversible errors, leading to results markedly different from market values. This paper aims to provide a contribution in this area. Its main objective is to construct a mortality table for non-listed Portuguese enterprises, showing that computing the terminal value through a mathematical expression of perpetuity of free cash flows is not adequate. We propose the use of an appropriate coefficient to estimate the number of years during which the company will continue to operate until its theoretical extinction. If properly addressed within valuation models, this issue can reduce or even eliminate one of the main problems that cause distortions in contemporary enterprise valuation models: the premise of an enterprise's unlimited existence in time. Besides studying the companies in the sample, from their creation to their demise, our study intends to push knowledge forward by providing a consistent life and mortality expectancy table for each age of the company, presenting models with an explicit and distinct survival rate for each year. Moreover, we show that, after reaching a certain age, firms can reinvent their business, acquiring maturity and consequently postponing their mortality through an additional life period.
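As an illustration of the core idea (the notation is ours, not the paper's): the standard perpetuity can be replaced by a finite, survival-weighted sum over the firm's remaining expected life,

    \[
      \underbrace{\frac{FCF_{n+1}}{r-g}}_{\text{standard perpetuity}}
      \;\longrightarrow\;
      \underbrace{\sum_{t=n+1}^{T} \frac{S(t)\,FCF_t}{(1+r)^{t-n}}}_{\text{survival-weighted terminal value}}
    \]

where n is the last year of the explicit analysis period, r the discount rate, g the perpetual growth rate, S(t) the probability (taken from the mortality table) that the firm is still active at age t, and T its theoretical extinction age.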
Abstract:
This paper investigates three decision problems with the potential to optimize operation and maintenance (O&M) and logistics strategies for offshore wind farms: the timing of pre-determined jack-up vessel campaigns; the selection of the crew transfer vessel fleet; and the timing of annual services. These problems are compared both in terms of potential cost reduction and in terms of the stochastic variability, and associated uncertainty, of the outcome. Pre-determined jack-up vessel campaigns appear to have a high cost reduction potential but also a higher stochastic variability than the other decision problems. The paper also demonstrates the benefits and difficulties of considering the problems together rather than solving them in isolation.
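A minimal sketch of the kind of comparison described, via Monte Carlo simulation; all distributions and cost figures are invented for illustration, not taken from the paper:

    # Compare a baseline O&M strategy against pre-determined jack-up campaigns
    # by mean cost reduction and by the variability of the outcome.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    # Hypothetical annual costs (MEUR): campaigns are cheaper on average
    # but more variable, as the paper reports for this decision problem.
    baseline = rng.normal(loc=10.0, scale=1.0, size=n)
    campaigns = rng.normal(loc=8.5, scale=2.0, size=n)

    savings = baseline - campaigns
    print(f"mean cost reduction: {savings.mean():.2f} MEUR")
    print(f"std of the outcome:  {savings.std():.2f} MEUR")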
Abstract:
Due to the rapid changes that govern the Swedish financial sector, such as financial deregulation and technological innovation, it is imperative to examine the extent to which Swedish financial institutions have performed amid these changes. To accomplish this, the work investigates the determinants of performance for Swedish financial monetary institutions. Assumptions were derived from the theoretical and empirical literature to investigate this research question using seven explanatory variables. Two models were specified using Return on Assets (ROA) and Return on Equity (ROE) as the main performance indicators, and, for the sake of reliability and validity, three different estimators, Ordinary Least Squares (OLS), Generalized Least Squares (GLS) and Feasible Generalized Least Squares (FGLS), were employed. The Akaike Information Criterion (AIC) was used to verify which specification explains performance better, while a robustness check of the parameter estimates was performed by correcting the standard errors. Based on the findings, the ROA specification has the lowest AIC and standard errors compared to the ROE specification. Under ROA, two variables, the profit margin and the interest coverage ratio (ICR), prove statistically significant, while under ROE only the ICR proves significant for all estimators. The results also show that FGLS is the most efficient estimator, followed by GLS and lastly OLS. When corrected for robust standard errors, the gearing ratio, which measures the capital structure, becomes significant under ROA and its estimate becomes positive under robust ROE. It is concluded that, within the period of study, three variables (ICR, profit margin and gearing) are significant and four variables are insignificant. The overall findings show that the institutions strive to maximize returns, but these returns were just enough to cover their costs of operation. Much should be done, as per the ASC theory, to avoid liquidity and credit risk problems. Again, the estimated values of the ICR and profit margin show that a considerable amount of effort, together with sound financial policies, is required to increase performance by one percentage point. Areas of further research could be how individual stochastic factors such as the Dupont model, repo rates, inflation, GDP, etc. can influence performance.
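A sketch of the estimation pipeline described above: OLS with robust standard errors, a two-step FGLS (here implemented as weighted least squares with a variance model fitted on the OLS residuals), and an AIC comparison. The data and variable names are synthetic; the thesis uses seven explanatory variables on real institution data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    X = sm.add_constant(rng.normal(size=(n, 2)))      # e.g. profit margin, ICR
    y = X @ np.array([0.5, 1.0, 2.0]) + rng.normal(scale=np.exp(X[:, 1]), size=n)

    ols = sm.OLS(y, X).fit(cov_type="HC1")            # OLS with robust SEs
    # Step 2 of FGLS: model the error variance from the OLS residuals,
    # then re-estimate with weights 1 / sigma_hat^2.
    log_res2 = np.log(ols.resid ** 2)
    var_fit = sm.OLS(log_res2, X).fit()
    weights = 1.0 / np.exp(var_fit.fittedvalues)
    fgls = sm.WLS(y, X, weights=weights).fit()

    print(f"OLS  AIC: {ols.aic:.1f}")
    print(f"FGLS AIC: {fgls.aic:.1f}")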
Abstract:
The aims of this cross-sectional study were to examine the developmental characteristics (biological maturation and body size) associated with gross motor coordination problems in 5193 Peruvian children (2787 girls) aged 6–14 years from different geographical locations, and to investigate how the probability that children suffer from gross motor coordination problems varies with physical fitness. Children with gross motor coordination problems were more likely to have lower flexibility and explosive strength levels, having adjusted for age, sex, maturation and study site. Older children were more likely to suffer from gross motor coordination problems, as were those with a greater body mass index. However, more mature children were less likely to have gross motor coordination problems, and children living at sea level or at high altitude were more likely to suffer from gross motor coordination problems than children living in the jungle. Our results provide evidence that children and adolescents with lower physical fitness are more likely to have gross motor coordination difficulties. Identifying youths with gross motor coordination problems and providing them with effective intervention programs is an important priority in order to overcome such developmental problems and help improve their general health status.
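The abstract does not name the exact model, but its adjusted-odds language suggests a logistic regression; a hedged sketch with invented variable names and simulated data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    X = sm.add_constant(np.column_stack([
        rng.normal(size=n),              # flexibility (standardized)
        rng.normal(size=n),              # explosive strength (standardized)
        rng.integers(6, 15, size=n),     # age in years (6-14)
    ]))
    # Negative fitness coefficients: lower fitness -> higher probability
    # of coordination problems, matching the direction reported above.
    p = 1 / (1 + np.exp(-(X @ np.array([-1.0, -0.6, -0.5, 0.1]))))
    y = rng.binomial(1, p)

    fit = sm.Logit(y, X).fit(disp=0)
    print(np.exp(fit.params))            # odds ratios per covariate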
Abstract:
Solving a complex Constraint Satisfaction Problem (CSP) is a computationally hard task which may require a considerable amount of time. Parallelism has been applied successfully to this job, and there are already many applications capable of harnessing the parallel power of modern CPUs to speed up the solving process. Current Graphics Processing Units (GPUs), containing from a few hundred to a few thousand cores, possess a level of parallelism that surpasses that of CPUs, yet far fewer applications are capable of solving CSPs on GPUs, leaving space for further improvement. This paper describes work in progress on solving CSPs on GPUs, CPUs and other devices, such as Intel Many Integrated Cores (MICs), in parallel. It presents the gains obtained when applying more devices to solve some problems, and the main challenges that must be faced when using devices with architectures as different as those of CPUs and GPUs, with a greater focus on how to effectively achieve good load balancing between such heterogeneous devices.
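One simple load-balancing strategy for such heterogeneous devices is to split the search space in proportion to each device's measured solving rate; a sketch, with device names and rates purely illustrative (not the paper's method):

    def split_work(total_items: int, rates: dict[str, float]) -> dict[str, int]:
        """Assign each device a share of the search space proportional
        to its measured solving rate (items per second)."""
        total_rate = sum(rates.values())
        shares = {d: int(total_items * r / total_rate) for d, r in rates.items()}
        # Give any rounding remainder to the fastest device.
        fastest = max(rates, key=rates.get)
        shares[fastest] += total_items - sum(shares.values())
        return shares

    # Rates could be measured on a first small batch of the search space.
    print(split_work(1_000_000, {"cpu": 3.0e4, "gpu": 2.1e5, "mic": 9.0e4}))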
Abstract:
In recent years, radars have been used in many applications, such as precision agriculture and advanced driver assistance systems. Optimal techniques for the estimation of the number of targets and of their coordinates require solving multidimensional optimization problems entailing huge computational efforts. This has motivated the development of sub-optimal estimation techniques able to achieve good accuracy at a manageable computational cost. Another technical issue in advanced driver assistance systems is the tracking of multiple targets. Even if various filtering techniques have been developed, new efficient and robust algorithms for target tracking can be devised by exploiting a probabilistic approach based on the use of the factor graph and the sum-product algorithm. The two contributions provided by this dissertation are the investigation of the filtering and smoothing problems from a factor graph perspective, and the development of efficient algorithms for two- and three-dimensional radar imaging. Concerning the first contribution, a new factor graph for filtering is derived and the sum-product rule is applied to this graphical model; this makes it possible to interpret known algorithms and to develop new filtering techniques. Then, a general method, based on graphical modelling, is proposed to derive filtering algorithms that involve a network of interconnected Bayesian filters. Finally, the proposed graphical approach is exploited to devise a new smoothing algorithm. Numerical results for dynamic systems show that our algorithms can achieve a better complexity-accuracy tradeoff and better tracking capability than other techniques in the literature. Regarding radar imaging, various algorithms are developed for frequency modulated continuous wave radars; these algorithms rely on novel and efficient methods for the detection and estimation of multiple superimposed tones in noise. The accuracy achieved in the presence of multiple closely spaced targets is assessed on the basis of both synthetically generated data and measurements acquired through two commercial multiple-input multiple-output radars.
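The textbook baseline behind "estimation of multiple superimposed tones in noise" is FFT peak picking on the beat signal; a sketch with invented tone frequencies (the dissertation develops more refined estimators than this):

    import numpy as np

    fs = 1_000.0                                   # sampling rate (Hz), illustrative
    t = np.arange(1024) / fs
    rng = np.random.default_rng(2)
    # Two hypothetical beat tones plus noise.
    x = (np.sin(2 * np.pi * 110.0 * t)
         + 0.6 * np.sin(2 * np.pi * 240.0 * t)
         + 0.3 * rng.normal(size=t.size))

    spectrum = np.abs(np.fft.rfft(x * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)

    idx1 = int(np.argmax(spectrum))                # strongest tone
    spectrum[max(0, idx1 - 3): idx1 + 4] = 0.0     # suppress its main lobe
    idx2 = int(np.argmax(spectrum))                # second tone
    print(f"estimated tones: {sorted([freqs[idx1], freqs[idx2]])} Hz")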
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on the performance of many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme considering a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
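A generic textbook instance of the variational approach mentioned above is the iterative shrinkage-thresholding algorithm (ISTA) for a sparsity-promoting regularized least-squares problem, min over x of 0.5*||Ax - b||^2 + lam*||x||_1; this is only a baseline sketch, not the thesis' gradient-domain or Plug-and-Play schemes:

    import numpy as np

    def ista(A, b, lam, step, iters=500):
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                                  # data-term gradient
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(60, 100))
    x_true = np.zeros(100)
    x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.normal(size=60)
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x_hat = ista(A, b, lam=0.1, step=step)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))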
Abstract:
Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the capacity of the vehicle. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, as well as an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem. Furthermore, a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
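A purely schematic iterated local search (ILS) skeleton of the kind applied to these problems; the actual neighborhoods, perturbations and acceptance rule are problem-specific and not reproduced here, and the toy demo is ours:

    import random

    def iterated_local_search(initial, local_search, perturb, cost, iters=1000):
        best = local_search(initial)
        current = best
        for _ in range(iters):
            candidate = local_search(perturb(current))
            if cost(candidate) <= cost(current):      # accept non-worsening moves
                current = candidate
            if cost(current) < cost(best):
                best = current
        return best

    # Toy demo: order numbers to minimize the total adjacent difference.
    def swap_two(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
    start = random.sample(range(10), 10)
    # Local search omitted for brevity (identity map here).
    best = iterated_local_search(start, lambda s: s, swap_two, cost)
    print(best, cost(best))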
Abstract:
My thesis falls within the framework of physics education and the teaching of mathematics. The objective of this report was pursued by using geometrical (in mathematics) and qualitative (in physics) problems. We prepared four open-answer exercises for mathematics and three for physics. The test batch was selected across two different school phases: the end of middle school (third year, 8th grade) and the beginning of high school (second and third year, 10th and 11th grades respectively). High school students achieved the best results in almost every problem, but 10th grade students obtained the best overall results. Moreover, a clear tendency not even to attempt the resolution of qualitative problems emerged from the first collection of graphs, regardless of subject and grade. In order to improve students' problem-solving skills, it is worth investing in vertical learning and spiral curricula. It would make sense to establish a stronger and clearer connection between physics and mathematical knowledge through an interdisciplinary approach.
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces like the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear system solver can significantly decrease the time spent computing its solution. One of the most widely used algorithms deployed for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
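One low-level way to take the kind of energy reading described is the Linux powercap interface to RAPL; a minimal sketch (the sysfs path and permissions vary by system, and PAPI offers a portable, application-level alternative):

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter, microjoules

    def read_uj() -> int:
        with open(RAPL) as f:
            return int(f.read())

    before = read_uj()
    time.sleep(1.0)                      # stand-in for the solver run
    after = read_uj()
    # The counter wraps around; a real tool must handle the overflow.
    print(f"energy used: {(after - before) / 1e6:.3f} J")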
Abstract:
Substantial complexity has been introduced into treatment regimens for patients with human immunodeficiency virus (HIV) infection. Many drug-related problems (DRPs) are detected in these patients, such as low adherence, therapeutic inefficacy, and safety issues. We evaluated the impact of pharmacist interventions on CD4+ T-lymphocyte count, HIV viral load, and DRPs in patients with HIV infection. In this 18-month prospective controlled study, 90 outpatients were selected by convenience sampling from the Hospital Dia-University of Campinas Teaching Hospital (Brazil). Forty-five patients comprised the pharmacist intervention group and 45 the control group; all patients had HIV infection with or without acquired immunodeficiency syndrome. Pharmaceutical appointments were conducted based on the Pharmacotherapy Workup method, although DRPs and pharmacist intervention classifications were modified for applicability to institutional service limitations and research requirements. Pharmacist interventions were performed immediately after detection of DRPs. The main outcome measures were DRPs, CD4+ T-lymphocyte count, and HIV viral load. After pharmacist intervention, DRPs decreased from 5.2 (95% confidence interval [CI] =4.1-6.2) to 4.2 (95% CI =3.3-5.1) per patient (P=0.043). A total of 122 pharmacist interventions were proposed, with an average of 2.7 interventions per patient. All the pharmacist interventions were accepted by physicians, and among patients, the interventions were well accepted during the appointments, but compliance with the interventions was not measured. A statistically significant increase in CD4+ T-lymphocyte count in the intervention group was found (260.7 cells/mm³ [95% CI =175.8-345.6] to 312.0 cells/mm³ [95% CI =23.5-40.6], P=0.015), which was not observed in the control group. There was no statistical difference between the groups regarding HIV viral load. This study suggests that pharmacist interventions in patients with HIV infection can cause an increase in CD4+ T-lymphocyte counts and a decrease in DRPs, demonstrating the importance of an optimal pharmaceutical care plan.
Abstract:
The taxonomic status of a disjunct population of Phyllomedusa from southern Brazil was diagnosed using molecular, chromosomal, and morphological approaches, which resulted in the recognition of a new species of the P. hypochondrialis group. Here, we describe P. rustica sp. n. from the Atlantic Forest biome, found in natural highland grassland formations on a plateau in the south of Brazil. Phylogenetic inferences placed P. rustica sp. n. in a subclade that includes P. rhodei + all the highland species of the clade. Chromosomal morphology is conservative, supporting the inference of homologies among the karyotypes of the species of this genus. Phyllomedusa rustica is apparently restricted to its type locality, and we discuss the potential impact on the strategies applied to the conservation of the natural grassland formations found within the Brazilian Atlantic Forest biome in southern Brazil. We suggest that conservation strategies should be modified to guarantee the preservation of this species.
Abstract:
Transfer of reaction products formed on the surfaces of two mutually rubbed dielectric solids makes an important, if not dominating, contribution to triboelectricity. New evidence in support of this statement is presented in this report, based on analytical electron microscopy coupled with electrostatic potential mapping techniques. Mechanical action on contacting surface asperities transforms them into hot-spots for free-radical formation, followed by electron transfer producing cationic and anionic polymer fragments, according to their electronegativity. Polymer ions accumulate, creating domains with excess charge, because they are formed at the fracture surfaces of pulled-out asperities. Another factor for charge segregation is the low entropy of polymer mixing, following Flory and Huggins. The formation of fractal charge patterns that was previously described is thus the result of the fractal scatter of polymer fragments on both contacting surfaces. The present results contribute to explaining the centuries-old difficulties in understanding the triboelectric series and triboelectricity in general, as well as the dissipative nature of friction, and they may lead to better control of friction and its consequences.