948 results for best estimate method


Relevance: 30.00%

Publisher:

Abstract:

Each year, thousands of adolescents are processed through the juvenile justice system -- a system that is complicated and expensive, and that inadequately addresses the needs of the youth in its care. While there is extensive literature in support of interventions for youthful offenders that are clinically superior to current care and more cost-effective than the existing structure, a gap between research and practice prevents their implementation. The use of Evidence-Based Practice in Psychology (EBPP), as defined by the American Psychological Association, is presented as one method to bridge this gap. This paper identifies and discusses five barriers to the effective use of EBPP: cost, fragmentation of the mental health system, historical and systemic variables, research methodology, and clinician variables. These barriers are first defined and then illustrated using examples from the author's experience working in the juvenile justice field. Finally, recommendations for the field are presented.

Relevance: 30.00%

Publisher:

Abstract:

Coastal erosion is an important and constant issue facing coastal areas all over the world today. The rate of coastal development has increased over the years, in turn requiring action to protect structures from the threat of erosion. A review of the causes of coastal erosion and the methods implemented to control it was conducted in order to determine the best course of action in response to coastal erosion issues. The potential positive and negative economic and environmental impacts are key concerns in determining whether or not to restore an eroding beach and which erosion control method(s) to implement. Results focus on providing a comparison of these concerns as well as recommendations for addressing coastal erosion issues.

Relevance: 30.00%

Publisher:

Abstract:

In this paper we present different error measurements with the aim of evaluating the quality of the approximations generated by the GNG3D method for mesh simplification. The first phase of this method consists of the execution of the GNG3D algorithm, described in the paper. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The implementation of three error functions, named Eavg, Emax and Esur, permits us to control the error of the simplified model, as shown in the examples studied.
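
As an illustration of how such error measurements can be computed, the sketch below reports an average and a maximum distance error between an original vertex set and its simplified counterpart using a nearest-vertex search. The exact definitions of Eavg, Emax and Esur used by the GNG3D authors are not given in this abstract, so the functions here are generic distance-based stand-ins.

```python
# Illustrative sketch only: average and maximum distances from each original
# vertex to its closest vertex in the simplified model, as a rough proxy for
# vertex-to-surface error. Not the paper's exact Eavg/Emax/Esur definitions.
import numpy as np
from scipy.spatial import cKDTree

def simplification_errors(original_vertices, simplified_vertices):
    """Return (E_avg, E_max) over nearest-vertex distances."""
    tree = cKDTree(simplified_vertices)
    dists, _ = tree.query(original_vertices, k=1)  # nearest simplified vertex
    return dists.mean(), dists.max()

# Example with random point clouds standing in for mesh vertices
orig = np.random.rand(10000, 3)
simp = orig[::20]                 # crude "simplification": keep every 20th vertex
e_avg, e_max = simplification_errors(orig, simp)
print(f"E_avg = {e_avg:.4f}, E_max = {e_max:.4f}")
```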

Relevance: 30.00%

Publisher:

Abstract:

Characterization of sound absorbing materials is essential to predict their acoustic behaviour. The most commonly used models for doing so consider the flow resistivity, porosity and average fibre diameter as parameters to determine the acoustic impedance and the sound absorption coefficient. Besides direct experimental techniques, numerical approaches appear to be an alternative for estimating the material's parameters. In this work an inverse numerical method to obtain some parameters of a fibrous material is presented. Starting from measurements of the normal-incidence sound absorption coefficient and the model proposed by Voronina, the application of basic minimization techniques allows one to obtain the porosity, average fibre diameter and density of a sound absorbing material. The numerical results agree fairly well with the experimental data.
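
A minimal sketch of the inverse-estimation idea follows, assuming a least-squares mismatch between measured and predicted absorption coefficients minimized with SciPy. Voronina's empirical model is not reproduced in the abstract, so the forward model is passed in as a function, and the `toy_model` used in the demonstration is a made-up placeholder, not her formulation.

```python
# Sketch of inverse parameter estimation for a fibrous absorber; illustrative only.
import numpy as np
from scipy.optimize import minimize

def fit_material(freqs, measured_alpha, forward_model, x0, bounds):
    """Find material parameters minimizing the squared mismatch between the
    measured normal-incidence absorption coefficient and a forward model's
    prediction. forward_model(freqs, *params) would be, e.g., Voronina's
    empirical model (not reproduced here)."""
    def cost(params):
        predicted = forward_model(freqs, *params)
        return np.sum((predicted - measured_alpha) ** 2)
    return minimize(cost, x0, bounds=bounds, method="L-BFGS-B")

# Toy demonstration with a made-up surrogate model (NOT Voronina's model):
def toy_model(freqs, porosity, fibre_diam, density):
    return 1.0 - np.exp(-(porosity * 1e-3 + fibre_diam * 1e4 + density * 1e-2)
                        * freqs / 4000.0)

freqs = np.linspace(100, 5000, 50)
true_params = (0.95, 60e-6, 40.0)          # porosity, fibre diameter [m], density [kg/m3]
measured = toy_model(freqs, *true_params)  # pretend these are measurements
res = fit_material(freqs, measured, toy_model, x0=(0.8, 30e-6, 60.0),
                   bounds=[(0.5, 0.999), (1e-6, 1e-4), (5.0, 200.0)])
print(res.x)
```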

Relevance: 30.00%

Publisher:

Abstract:

Several recent works deal with 3D data in mobile robotic problems, e.g., mapping. The data come from any kind of sensor (time of flight, Kinect or 3D lasers) that provides a huge amount of unorganized 3D data. In this paper we detail an efficient approach to building complete 3D models using a soft computing method, the Growing Neural Gas (GNG). As neural models deal easily with noise, imprecision, uncertainty and partial data, GNG provides better results than other approaches. The obtained GNG is then applied to a sequence of captured scenes. We present a comprehensive study on GNG parameters to ensure the best result at the lowest time cost. From this GNG structure, we propose to calculate planar patches, thus obtaining a fast method to compute the movement performed by a mobile robot by means of a 3D model registration algorithm. Final results of 3D mapping are also shown.
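
As a rough illustration of the planar-patch step, the sketch below fits a local plane (centroid plus unit normal) around each node of an already trained GNG by PCA over its nearest neighbours. This is a generic procedure assumed for illustration, not the authors' exact implementation, and the random array stands in for trained node positions.

```python
# Hedged sketch: estimating a planar patch per GNG node via local PCA.
import numpy as np
from scipy.spatial import cKDTree

def planar_patches(nodes, k=10):
    """For each node, fit a local plane from its k nearest neighbouring nodes
    and return (centroid, unit normal) pairs."""
    tree = cKDTree(nodes)
    patches = []
    for node in nodes:
        _, idx = tree.query(node, k=k)
        nb = nodes[idx]
        centroid = nb.mean(axis=0)
        # Normal = right singular vector with the smallest singular value
        _, _, vt = np.linalg.svd(nb - centroid)
        patches.append((centroid, vt[-1]))
    return patches

nodes = np.random.rand(500, 3)          # stand-in for trained GNG node positions
patches = planar_patches(nodes, k=12)
print(len(patches), "planar patches")
```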

Relevance: 30.00%

Publisher:

Abstract:

Thermal degradation of PLA is a complex process since it comprises many simultaneous reactions. The use of analytical techniques such as differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) yields useful information, but a more sensitive analytical technique is necessary to identify and quantify the PLA degradation products. In this work the thermal degradation of PLA at high temperatures was studied using a pyrolyzer coupled to a gas chromatograph with mass spectrometry detection (Py-GC/MS). Pyrolysis conditions (temperature and time) were optimized in order to obtain an adequate chromatographic separation of the compounds formed during heating. The best resolution of chromatographic peaks was obtained by pyrolyzing the material from room temperature to 600 °C over 0.5 s. These conditions allowed the identification and quantification of the major compounds produced during PLA thermal degradation in an inert atmosphere. The strategy followed to select these operating parameters was sequential pyrolysis combined with the fitting of mathematical models. The application of this strategy demonstrated that PLA degrades at high temperatures following non-linear behaviour. Both logistic and Boltzmann models fit the experimental results well, although the Boltzmann model provided the best approach to calculate the time at which 50% of the PLA was degraded. In conclusion, the Boltzmann model can be applied as a tool for simulating PLA thermal degradation.
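
For illustration, the sketch below fits a Boltzmann sigmoid to a synthetic degradation-versus-time curve and reads off the midpoint as the time at which 50% of the material has degraded. The parameter values and data are invented for the example and are not the paper's measurements.

```python
# Sketch only: fitting a Boltzmann sigmoid to synthetic degradation data to
# estimate the 50%-degradation time (the fit's midpoint t0).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, a1, a2, t0, dt):
    """Boltzmann model: transitions from a1 to a2 with midpoint t0."""
    return a2 + (a1 - a2) / (1.0 + np.exp((t - t0) / dt))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                       # pyrolysis time (arbitrary units)
degraded = boltzmann(t, 0.0, 100.0, 4.0, 0.8)    # synthetic "% degraded" curve
degraded += rng.normal(0, 1.5, t.size)           # add measurement noise

popt, _ = curve_fit(boltzmann, t, degraded, p0=(0.0, 100.0, 5.0, 1.0))
print(f"Estimated t50 (midpoint of the Boltzmann fit): {popt[2]:.2f}")
```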

Relevance: 30.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapped point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among the ones described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with a lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
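
The sketch below is a minimal point-to-point ICP loop, included only to make the role of the distance metric concrete: the `p` argument of the k-d tree query switches the Minkowski metric used for the correspondence search (p=2 is the usual Euclidean distance, p=1 Manhattan is one example of a cheaper metric). It is an assumed generic implementation, not the one analyzed in this work.

```python
# Minimal point-to-point ICP sketch with a selectable Minkowski metric.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30, p=2):
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src, k=1, p=p)        # correspondence search
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: register a rotated, translated copy of a random cloud with the L1 metric
target = np.random.rand(2000, 3)
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
R_est, t_est = icp(source, target, p=1)
```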

Relevance: 30.00%

Publisher:

Abstract:

Background and objective: In this paper, we have tested the suitability of using different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classifying those surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three levels of surgical risk: low, medium and high complexity. Results: The accuracies achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk associated with congenital heart disease surgery.
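
To make the classification setup concrete, here is a hedged sketch of a three-class risk classifier built with a multilayer perceptron (the architecture reported as most accurate), using scikit-learn. The twelve features and the labels are synthetic placeholders, not the clinical variables or dataset used in the study.

```python
# Illustrative sketch: three-class surgical-risk classification with an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))        # 12 hypothetical preoperative features
y = rng.integers(0, 3, size=600)      # 0 = low, 1 = medium, 2 = high complexity

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```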

Relevance: 30.00%

Publisher:

Abstract:

Especially after the entry into force and subsequent implementation of the Lisbon Treaty, the traditional distinction (and opposition) between the so-called 'community' and 'inter-governmental' methods in EU policy-making is less and less relevant. Most common policies entail a 'mix' between them and different degrees of mutual contamination. Even the 'Union method' recently proposed by Chancellor Angela Merkel raises more questions than it solves – although it may trigger a constructive debate on how best to address today's policy challenges.

Relevance: 30.00%

Publisher:

Abstract:

Many different methods, based on both planned inspection and health (condition) inspection, are used to estimate the health of electrical equipment. A method for estimating the residual life of electric motors from their health, as applied in the pulp and paper industry, is considered.

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE To identify the prevalence and progression of macular atrophy (MA) in neovascular age-related macular degeneration (AMD) patients under long-term anti-vascular endothelial growth factor (VEGF) therapy and to determine risk factors. METHOD This retrospective study included patients with neovascular AMD and ≥30 anti-VEGF injections. MA was measured using near infrared and spectral-domain optical coherence tomography (SD-OCT). Yearly growth rate was estimated using square-root transformation to adjust for baseline area and allow for linearization of growth rate. Multiple regression with the Akaike information criterion (AIC) as model selection criterion was used to estimate the influence of various parameters on MA area. RESULTS Forty-nine eyes (47 patients, mean age 77 ± 14) were included with a mean of 48 ± 13 intravitreal anti-VEGF injections (ranibizumab: 37 ± 11, aflibercept: 11 ± 6, mean number of injections/year 8 ± 2.1) over a mean treatment period of 6.2 ± 1.3 years (range 4-8.5). Mean best-corrected visual acuity improved from 57 ± 17 letters at baseline (= treatment start) to 60 ± 16 letters at last follow-up. The MA prevalence within and outside the choroidal neovascularization (CNV) border at initial measurement was 45% and increased to 74%. Mean MA area increased from 1.8 ± 2.7 mm² within and 0.5 ± 0.98 mm² outside the CNV boundary to 2.7 ± 3.4 mm² and 1.7 ± 1.8 mm², respectively. Multivariate regression determined posterior vitreous detachment (PVD) and presence/development of intraretinal cysts (IRCs) as significant factors for total MA size (R² = 0.16, p = 0.02). MA area outside the CNV border was best explained by the presence of reticular pseudodrusen (RPD) and IRCs (R² = 0.24, p = 0.02). CONCLUSION A majority of patients show MA after long-term anti-VEGF treatment. RPD, IRCs and PVD, but not the number of injections or the treatment duration, seem to be associated with MA size.
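
As a small worked illustration of the square-root transformation mentioned above, the snippet below computes a linearized growth rate from a baseline and a follow-up lesion area. The input values are simply the mean areas and treatment period quoted in this abstract, used here as example numbers rather than a per-patient calculation.

```python
# Square-root transformation used to linearize lesion growth:
# yearly growth rate = (sqrt(A2) - sqrt(A1)) / (t2 - t1), in mm/year,
# which reduces the dependence of growth rate on baseline lesion area.
import math

def sqrt_growth_rate(area_start_mm2, area_end_mm2, years):
    """Square-root-transformed growth rate in mm/year."""
    return (math.sqrt(area_end_mm2) - math.sqrt(area_start_mm2)) / years

print(sqrt_growth_rate(1.8, 2.7, 6.2))   # e.g. MA within the CNV border over 6.2 years
```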

Relevance: 30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 30.00%

Publisher:

Abstract:

Accurate estimates of body mass in fossil taxa are fundamental to paleobiological reconstruction. Predictive equations derived from correlation with craniodental and body mass data in extant taxa are the most commonly used, but they can be unreliable for species whose morphology departs widely from that of living relatives. Estimates based on proximal limb-bone circumference data are more accurate but are inapplicable where postcranial remains are unknown. In this study we assess the efficacy of predicting body mass in Australian fossil marsupials by using an alternative correlate, endocranial volume. Body mass estimates for a species with highly unusual craniodental anatomy, the Pleistocene marsupial lion (Thylacoleo carnifex), fall within the range determined on the basis of proximal limb-bone circumference data, whereas estimates based on dental data are highly dubious. For all marsupial taxa considered, allometric relationships have small confidence intervals, and percent prediction errors are comparable to those of the best predictors using craniodental data. Although application is limited in some respects, this method may provide a useful means of estimating body mass for species with atypical craniodental or postcranial morphologies and taxa unrepresented by postcranial remains. A trend toward increased encephalization may constrain the method's predictive power with respect to many, but not all, placental clades.
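
The snippet below illustrates the general allometric workflow described here: an ordinary least-squares fit of log body mass against log endocranial volume for reference taxa, a back-transformed prediction for a new specimen, and the percent prediction error on the reference data. All numbers are synthetic; the abstract does not report the fitted coefficients or the underlying dataset.

```python
# Illustrative allometric regression on log-transformed data (synthetic values).
import numpy as np

rng = np.random.default_rng(1)
endo_vol = rng.uniform(5, 400, 30)                                # cm^3, hypothetical extant taxa
body_mass = 0.8 * endo_vol ** 0.75 * rng.lognormal(0, 0.1, 30)    # kg, synthetic allometry

slope, intercept = np.polyfit(np.log(endo_vol), np.log(body_mass), 1)

def predict_mass(volume_cm3):
    """Back-transformed prediction from the log-log regression."""
    return float(np.exp(intercept + slope * np.log(volume_cm3)))

predicted = np.exp(intercept + slope * np.log(endo_vol))
percent_prediction_error = 100 * np.abs(body_mass - predicted) / predicted
print("mean %PE:", percent_prediction_error.mean())
print("predicted mass for a 150 cm^3 endocast:", predict_mass(150.0), "kg")
```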

Relevance: 30.00%

Publisher:

Abstract:

Effective healthcare integration is underpinned by clinical information transfer that is timely, legible and relevant. The aim of this study was to describe and evaluate a method for best-practice information exchange, based on the generic Mater integration methodology. Using this model, Mater Health Services increased effective community fax discharge from 34% in 1999 to 86% in 2002. These results were predicated on excellence in applied information technology, namely the development of the Mater Electronic Health Referral Summary, and on an effective change management methodology, which included addressing issues around patient consent, engaging clinicians, providing timely and appropriate education and training, securing executive leadership and commitment, and ensuring adequate resourcing. The challenge in achieving best-practice information transfer lies not solely in the technology but also in implementing the change process and engaging clinicians. General practitioners valued the intervention highly. Hospital and community providers now have an inexpensive, effective product for exchanging critical information in a timely and relevant manner, enhancing the quality and safety of patient care.

Relevance: 30.00%

Publisher:

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
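
A minimal sketch of the transform-and-reweight idea is given below for the toy problem of estimating P(X1 + ... + Xn > gamma) with i.i.d. Pareto(alpha) components: writing X = exp(Z/alpha) with Z ~ Exp(1) moves the problem to a light-tailed space, where an exponential importance-sampling proposal is tuned with a basic cross-entropy loop. The problem instance, proposal family and constants are illustrative choices, not the models or parameters analyzed in the paper.

```python
# Toy TLR-style sketch: P(sum of n Pareto(alpha) variables > gamma), with
# P(X > x) = x**(-alpha) for x >= 1, via X = exp(Z/alpha), Z ~ Exp(1).
import numpy as np

rng = np.random.default_rng(42)
n, alpha, gamma = 5, 1.5, 300.0          # problem size, tail index, threshold
rho, N = 0.1, 100_000                    # CE elite fraction and samples per stage

def sum_pareto(z):                       # S = sum_i exp(z_i / alpha)
    return np.exp(z / alpha).sum(axis=1)

def likelihood_ratio(z, v):              # prod_i [Exp(1) density / Exp(mean v) density]
    return np.exp(-z.sum(axis=1) * (1.0 - 1.0 / v)) * v ** z.shape[1]

# Cross-entropy stages: raise the level until gamma is reached, updating v
v = 1.0                                  # proposal mean for each Z_i (start at nominal)
for _ in range(20):                      # usually only a few stages are needed
    z = rng.exponential(scale=v, size=(N, n))
    s = sum_pareto(z)
    level = min(np.quantile(s, 1.0 - rho), gamma)
    elite = s >= level
    w = likelihood_ratio(z, v)
    v = np.sum(w[elite] * z[elite].mean(axis=1)) / np.sum(w[elite])
    if level >= gamma:
        break

# Final importance-sampling estimate with the tuned proposal
z = rng.exponential(scale=v, size=(N, n))
estimate = np.mean((sum_pareto(z) > gamma) * likelihood_ratio(z, v))
print(f"tuned proposal mean v = {v:.3f}, estimated probability = {estimate:.3e}")
```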