938 results for PM3 semi-empirical method
Abstract:
This study investigated the feasibility of using qualitative methods to provide empirical documentation of the long-term qualitative change in the life course trajectories of "at risk" youth in a school based positive youth development program (the Changing Lives Program—CLP). This work draws from life course theory for a developmental framework and from recent advances in the use of qualitative methods in general and a grounded theory approach in particular. Grounded theory provided a methodological framework for conceptualizing the use of qualitative methods for assessing qualitative life change. The study investigated the feasibility of using the Possible Selves Questionnaire-Qualitative Extension (PSQ-QE) for evaluating the impact of the program on qualitative change in participants' life trajectory relative to a non-intervention control group. Integrated Qualitative/Quantitative Data Analytic Strategies (IQ-DAS) that we have been developing as part of our program of research provided the data analytic framework for the study. Change was evaluated in 85 at risk high school students in CLP high school counseling groups over three assessment periods (pre, post, and follow-up), and a non-intervention control group of 23 students over two assessment periods (pre and post). Intervention gains and maintenance and the extent to which these patterns of change were moderated by gender and ethnicity were evaluated using a mixed design Repeated Measures Multivariate Analysis of Variance (RMANOVA) in which Time (pre, post) was the within (repeated) factor and Condition, Gender, and Ethnicity the between group factors. The trends for the direction of qualitative change were positive from pre to post and maintained at the year-end follow-up. More important, the 3-way interaction for Time x Gender x Ethnicity was significant, Roy's Θ = .205, F(2, 37) = 3.80, p < .032, indicating that the overall pattern of positive change was significantly moderated by gender and ethnicity.
Thus, the findings also provided preliminary evidence for a positive impact of the youth development program on long-term change in life course trajectory, and were suggestive with respect to the issue of amenability to treatment, i.e., the identification of subgroups of individuals in a target population who are likely to be the most amenable or responsive to a treatment.
Abstract:
There is an increasing demand for DNA analysis because of the sensitivity of the method and the ability to uniquely identify and distinguish individuals with a high degree of certainty. This demand, however, has led to huge backlogs in evidence lockers, since current DNA extraction protocols require long processing times. The DNA analysis procedure becomes more complicated when analyzing sexual assault casework samples where the evidence contains more than one contributor. Additional processing to separate different cell types in order to simplify the final data interpretation further contributes to the existing cumbersome protocols. The goal of the present project is to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis time upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. While typically less than 10% of the male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also conducted to determine the efficiency of this method in processing samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual intervention, and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application in analyzing sexual assault samples.
Abstract:
Modern IT infrastructures are constructed by large scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers often seek automatic or semi-automatic methodologies of detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The detailed problems studied by these approaches can be categorized into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated by this dissertation are developed based on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components.
To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs that assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
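As a rough sketch of the KNN-based recommendation idea described above (the dissertation's algorithms are more elaborate), historical tickets can be embedded as TF-IDF vectors and the resolutions of an incoming ticket's nearest neighbours returned. The toy tickets, resolutions, and function names below are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(history, resolutions, incoming, k=2):
    """Return resolutions of the k historical tickets nearest to `incoming`."""
    docs = [t.lower().split() for t in history]
    vecs, idf = tfidf_vectors(docs)
    q_tf = Counter(incoming.lower().split())
    q = {t: q_tf[t] * idf.get(t, 0.0) for t in q_tf}
    ranked = sorted(range(len(history)),
                    key=lambda i: cosine(q, vecs[i]), reverse=True)
    return [resolutions[i] for i in ranked[:k]]

history = [
    "disk usage above threshold on database server",
    "cpu utilization high on web server",
    "disk full on backup server",
]
resolutions = ["extend volume", "restart service", "purge old backups"]

print(recommend(history, resolutions, "disk usage alert on file server", k=2))
```

A production system would of course use a richer text representation and an indexed nearest-neighbour search rather than this linear scan.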
Abstract:
In 2001, a weather and climate monitoring network was established along the temperature and aridity gradient between the sub-humid Moroccan High Atlas Mountains and the former end lake of the Middle Drâa in a pre-Saharan environment. The highest Automated Weather Station (AWS) was installed just below the M'Goun summit at 3850 m; the lowest station, Lac Iriki, was at 450 m. This network of 13 AWS stations was funded and maintained by the German IMPETUS project (BMBF Grant 01LW06001A, North Rhine-Westphalia Grant 313-21200200), and since 2011 five stations have been further maintained by the German DFG Fennec project (FI 786/3-1); in this way, some stations of the AWS network provided data for almost 12 years, from 2001 to 2012. Standard meteorological variables such as temperature, humidity, and wind were measured at a height of 2 m above ground. Other meteorological variables comprise precipitation, station pressure, solar irradiance, soil temperature at different depths and, for the high mountain stations, snow water equivalent. The stations produced data summaries as 5-minute precipitation data, 10- or 15-minute data, and daily summaries of all other variables. This network is a unique resource of multi-year weather data in the remote semi-arid to arid mountain region of the Saharan flank of the Atlas Mountains. The network is described in Schulz et al. (2010) and its continuation until 2012 is briefly discussed in Redl et al. (2015, doi:10.1175/MWR-D-15-0223.1) and Redl et al. (2016, doi:10.1002/2015JD024443).
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging to specify, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and a working correlation structure. Confidence regions and hypothesis tests are based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure. Because of such misspecifications, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for the estimation of the regression parameters and the construction of a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates. In this situation it is necessary to identify a submodel that adequately represents the data. Including redundant variables may impact the model's accuracy and efficiency for inference. We propose a penalized empirical likelihood (PEL) variable selection based on GEEs; the variable selection and the estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing PEL.
Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
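The empirical likelihood machinery is easiest to see in its simplest instance: profiling the mean of a scalar sample rather than GEE-based regression parameters. The stdlib-only sketch below (all names hypothetical) computes the EL weights for a candidate mean by solving the Lagrange multiplier equation with bisection; it is an illustration of the EL idea, not the thesis's estimator:

```python
def el_weights(x, mu, tol=1e-12):
    """Empirical-likelihood weights p_i maximizing sum(log p_i)
    subject to sum(p_i) = 1 and sum(p_i * (x_i - mu)) = 0.
    p_i = 1 / (n * (1 + lam * (x_i - mu))); lam found by bisection."""
    n = len(x)
    d = [xi - mu for xi in x]
    if not (min(x) < mu < max(x)):
        raise ValueError("mu must lie strictly inside the range of the data")
    # Feasible interval for lam keeps every 1 + lam*d_i positive
    lo = -1.0 / max(di for di in d if di > 0) + 1e-10
    hi = -1.0 / min(di for di in d if di < 0) - 1e-10

    def g(lam):  # monotone decreasing in lam; root gives the constraint
        return sum(di / (1.0 + lam * di) for di in d)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [1.0 / (n * (1.0 + lam * di)) for di in d]

x = [1.0, 2.0, 3.0, 4.0, 10.0]
p = el_weights(x, mu=3.0)
assert abs(sum(p) - 1.0) < 1e-8
assert abs(sum(pi * xi for pi, xi in zip(p, x)) - 3.0) < 1e-6
print([round(pi, 4) for pi in p])
```

Comparing the resulting log-likelihood ratio across candidate values of `mu` is what yields EL confidence regions.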
Abstract:
During a machining process, cutting parameters must be taken into account, since they determine how quickly the cutting edge wears, to the point that the tool can fail and must be changed, which increases the cost and time of production. Because wear reduces tool life, it is important to optimize the cutting variables used during the machining process in order to increase tool life. This research focuses on the influence of cutting parameters such as cutting speed, feed per tooth, and axial depth of cut on tool wear during a face milling operation. The Taguchi method is applied in this study, since it uses a special design of orthogonal arrays to study the entire parameter space with only a small number of experiments. A relationship between tool wear and the cutting parameters is also presented. For the studies, a martensitic 416 stainless steel was selected, due to the importance of this material in the machining of valve parts and pump shafts. Copyright © 2009 by ASME.
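As a hedged illustration of how a Taguchi orthogonal-array design works (the paper's actual factors, levels, and wear measurements are not reproduced here; all numbers below are invented), an L4 array can screen three two-level factors in four runs, with a smaller-is-better signal-to-noise ratio used to pick the best level of each factor:

```python
import math

# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors.
# Columns: cutting speed, feed per tooth, axial depth of cut (levels 0/1).
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Hypothetical measured flank wear (mm) for each run, two replicates.
wear = [
    [0.10, 0.12],
    [0.18, 0.20],
    [0.15, 0.16],
    [0.25, 0.27],
]

def sn_smaller_is_better(ys):
    """Taguchi smaller-is-better signal-to-noise ratio (dB)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_smaller_is_better(ys) for ys in wear]

# Main effect of each factor: mean S/N at each level; higher S/N is better.
best_levels = []
for f in range(3):
    means = []
    for level in (0, 1):
        vals = [sn[r] for r in range(4) if L4[r][f] == level]
        means.append(sum(vals) / len(vals))
    best_levels.append(0 if means[0] >= means[1] else 1)

print("S/N ratios:", [round(v, 2) for v in sn])
print("Best level per factor:", best_levels)
```

The orthogonality of the array is what lets each factor's main effect be averaged out independently of the others despite only four runs.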
Abstract:
The position of a stationary target can be determined using triangulation in combination with time-of-arrival measurements at several sensors. In urban environments, non-line-of-sight (NLOS) propagation leads to biased time estimation and thus to inaccurate position estimates. Here, a semi-parametric approach is proposed to mitigate the effects of NLOS propagation. The degree of contamination by NLOS components in the observations, which results in asymmetric noise statistics, is determined and incorporated into the estimator. The proposed method is suited to environments where the NLOS error plays a dominant role, and outperforms previous approaches that assume symmetric noise statistics.
Abstract:
An aerosol time-of-flight mass spectrometer (ATOFMS) was deployed for the measurement of the size resolved chemical composition of single particles at a site in Cork Harbour, Ireland for three weeks in August 2008. The ATOFMS was co-located with a suite of semi-continuous instrumentation for the measurement of particle number, elemental carbon (EC), organic carbon (OC), sulfate and particulate matter smaller than 2.5 μm in diameter (PM2.5). The temporality of the ambient ATOFMS particle classes was subsequently used in conjunction with the semi-continuous measurements to apportion PM2.5 mass using positive matrix factorisation. The synergy of the single particle classification procedure and positive matrix factorisation allowed for the identification of six factors, corresponding to vehicular traffic, marine, long-range transport, various combustion, domestic solid fuel combustion and shipping traffic with estimated contributions to the measured PM2.5 mass of 23%, 14%, 13%, 11%, 5% and 1.5% respectively. Shipping traffic was found to contribute 18% of the measured particle number (20–600 nm mobility diameter), and thus may have important implications for human health considering the size and composition of ship exhaust particles. The positive matrix factorisation procedure enabled a more refined interpretation of the single particle results by providing source contributions to PM2.5 mass, while the single particle data enabled the identification of additional factors not possible with typical semi-continuous measurements, including local shipping traffic.
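The factor-analytic step can be illustrated with a bare-bones non-negative matrix factorization. Note that the positive matrix factorisation used in such source-apportionment studies additionally weights residuals by measurement uncertainties, which this stdlib-only sketch (toy data, hypothetical names) omits:

```python
import random

def nmf(V, k, iters=2000, seed=0):
    """Plain non-negative matrix factorization V ~ W @ H using
    Lee-Seung multiplicative updates (unweighted; real PMF also
    divides residuals by per-measurement uncertainties)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        WtV, WtWH = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        VHt, WHHt = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

# Toy "sample x species" matrix generated by two hidden sources
V = [[2, 1, 0], [4, 2, 0], [0, 1, 3], [2, 2, 3]]
W, H = nmf(V, k=2)
R = [[sum(W[i][t] * H[t][j] for t in range(2)) for j in range(3)]
     for i in range(4)]
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(3))
print("reconstruction SSE:", round(err, 4))
```

The rows of `H` play the role of source profiles and the columns of `W` the time series of source contributions.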
Abstract:
In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as the cylindrical waveguide, optical fiber, and borehole with earth geological formation, are generally modeled as an axisymmetric structure: an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in the orthogonal directions.
In this report, several important improvements are made to extend this efficient solver to anisotropic OPCL media. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are deduced to expand its application to more general materials. A perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC), making the NMM method more accurate and efficient for wave diffusion problems in unbounded media and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to resolve the singularity problem, which was ignored by previous researchers. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy more efficiently with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove the relationship between the fields of opposite mode indices for different types of excitations, which reduces the computational time by half. Formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are deduced, and the excitation is generalized to line and surface current sources, which extends the application of NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations demonstrate the efficiency and accuracy of this method.
Finally, the improved numerical mode matching (NMM) method is introduced to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk that is symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and commercial software. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as the length, conductivity, and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities, and permeabilities are modeled together with the fractures in boreholes to investigate their effects on fracture detection. The results reveal that, although the attenuation of the electromagnetic field through the casing is significant, the normalized secondary field is not weakened at low frequencies, so the induction tool remains applicable for fracture detection. A hybrid approach combining the NMM method with an integral-equation solver based on BCGS-FFT is proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.
Abstract:
OBJECTIVE: To demonstrate the application of causal inference methods to observational data in the obstetrics and gynecology field, particularly causal modeling and semi-parametric estimation. BACKGROUND: Human immunodeficiency virus (HIV)-positive women are at increased risk for cervical cancer and its treatable precursors. Determining whether potential risk factors such as hormonal contraception are true causes is critical for informing public health strategies as longevity increases among HIV-positive women in developing countries. METHODS: We developed a causal model of the factors related to combined oral contraceptive (COC) use and cervical intraepithelial neoplasia 2 or greater (CIN2+) and modified the model to fit the observed data, drawn from women in a cervical cancer screening program at HIV clinics in Kenya. Assumptions required for substantiation of a causal relationship were assessed. We estimated the population-level association using semi-parametric methods: g-computation, inverse probability of treatment weighting, and targeted maximum likelihood estimation. RESULTS: We identified 2 plausible causal paths from COC use to CIN2+: via HPV infection and via increased disease progression. Study data enabled estimation of the latter only with strong assumptions of no unmeasured confounding. Of 2,519 women under 50 screened per protocol, 219 (8.7%) were diagnosed with CIN2+. Marginal modeling suggested a 2.9% (95% confidence interval 0.1%, 6.9%) increase in prevalence of CIN2+ if all women under 50 were exposed to COC; the significance of this association was sensitive to method of estimation and exposure misclassification. CONCLUSION: Use of causal modeling enabled clear representation of the causal relationship of interest and the assumptions required to estimate that relationship from the observed data. Semi-parametric estimation methods provided flexibility and reduced reliance on correct model form. 
Although selected results suggest an increased prevalence of CIN2+ associated with COC, evidence is insufficient to conclude causality. Priority areas for future studies to better satisfy causal criteria are identified.
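Of the three semi-parametric estimators mentioned, inverse probability of treatment weighting is the simplest to sketch. In the toy example below the cohort is invented and the propensity model is simply a stratum frequency, unlike the study's actual analysis; all names are hypothetical:

```python
from collections import defaultdict

def iptw_estimate(records):
    """Inverse-probability-of-treatment-weighted estimate of the
    marginal outcome prevalence under 'everyone exposed', using
    stratum frequencies as the propensity model.
    records: list of (confounder, exposed, outcome) with 0/1 values."""
    # Propensity P(exposed = 1 | confounder stratum)
    count = defaultdict(lambda: [0, 0])  # stratum -> [n, n_exposed]
    for c, a, y in records:
        count[c][0] += 1
        count[c][1] += a
    ps = {c: ne / n for c, (n, ne) in count.items()}
    # Weighted mean of outcomes among the exposed, weight 1/propensity
    num = sum(y / ps[c] for c, a, y in records if a == 1)
    den = sum(1 / ps[c] for c, a, y in records if a == 1)
    return num / den

# Hypothetical cohort: (risk-group stratum, exposed, outcome)
data = (
    [(1, 1, 1)] * 30 + [(1, 1, 0)] * 30 + [(1, 0, 1)] * 20 + [(1, 0, 0)] * 20
    + [(0, 1, 1)] * 10 + [(0, 1, 0)] * 30 + [(0, 0, 1)] * 15 + [(0, 0, 0)] * 45
)
print(round(iptw_estimate(data), 3))  # 0.375: stratum rates 0.5 and 0.25 standardized 50/50
```

Reweighting the exposed by the inverse of their propensity creates a pseudo-population in which the confounder is balanced, so the weighted prevalence matches the stratum-standardized one.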
Abstract:
Prior to the Civil Rights Movement, fewer than 50 Black judges had been elected or appointed to the judiciary. As of August 2015, there are over 1,000 Black state and federal judges. As the number of Black judges has increased, one question arises: have American courts been altered purely by this substantial increase? One expectation—and, at times, a prediction—behind the increased descriptive representation of Black judges is that their mere presence would alter the judiciary. It was supposed that these judges would substantively represent Black interests in the decisions they made. In other words, it was suspected, and predicted, that Blacks in the judiciary would enhance equality and justice by being aware of, responsive to, and advocating for African Americans. This theory about the likely role of Black judges derives from theoretical work on political representation and racial group consciousness, and empirical studies of Black elite behavior in other political institutions.
Despite such predictions, there is no corresponding scholarly consensus regarding whether Black judges possess a racial group consciousness and have racially distinctive judicial behavior. Therefore, the theory undergirding the demand for increased diversification, as a means to transform the judiciary, remains unsubstantiated. This is precisely where this project, "They're There, Now What?: The Identities, Behavior, and Perceptions of Black Judges," seeks to intervene in and explore, if not settle, the matter of whether Black judges possess a racial group consciousness and exhibit racially distinctive judicial behavior. It addresses a set of interrelated questions relevant to understanding whether we can view Black judges as representatives in ways that are similar to how we view other Black political officials. I examine these questions using a multi-method approach. For my analyses, I draw on diverse materials: the published biographies of every Black judge appointed to the federal bench, a survey experiment with a nationally representative adult sample, and semi-structured interviews with 30 Black judges.
This research, which engages with scholarship on representation, group consciousness, judicial behavior, and candidate perceptions, offers new insights into the lives, perceptions, and behavior of Black judges, as well as the manifestations of Black substantive representation in the judiciary. My dissertation argues that, despite the general reluctance to use the term "representation" when referring to judges, we can consider Black judges as representatives. Black judges behave as substantive representatives by (1) sharing and understanding the experience, history, and perspectives of Black Americans, (2) challenging language, persons, policies, and laws they feel negatively affect, or violate the rights and liberties of, African Americans, (3) respecting African American litigants, and (4) ensuring the rights of African Americans are protected and the needs of Black Americans are being met.
Only through research that considers the perspectives, identities, perceptions, and behavior of Black judges will we arrive at a more comprehensive understanding of the importance of racial diversity in the courts. As this project finds, a link between descriptive representation and substantive representation can, and frequently does exist within the judicial context. Such a link is significant given that Blacks’ liberty and justice through the American legal system continues to be subject to those who exercise judicial power. This dissertation has implications for the discourse surrounding the need for increased descriptive and substantive representation of Blacks in the judiciary, and the factors that affect representation in the justice system.
Abstract:
As anthropogenic activities push many ecosystems toward different functional regimes, the resilience of social-ecological systems becomes a pressing issue. Local actors, involved in a wide variety of groups — ranging from independent local initiatives to large formal institutions — can act on these issues by collaborating on the development, promotion, or implementation of practices better aligned with what the environment can provide. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SESs) in which they participate. The topology of actor networks that favors the resilience of their SES is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus and to increase social capital and learning capacity; and it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or sidelining of certain actors. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyze the structural qualities of empirical actor networks. However, some of them are inherently incompatible with one another. For example, the modularity of a network cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness.
This obstacle makes it difficult to create a global measure, because the degree to which an actor network contributes to improving the resilience of its SES cannot be the simple sum of the characteristics listed above, but rather the result of a subtle trade-off among them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyze an empirical network in light of, among other things, these structural qualities. This thesis is organized around an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on the resilience of SESs. It identifies a series of structural characteristics (along with the network measures that correspond to them) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use, together with climate change, contributes to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews made it possible to compile a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations. Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded.
The method proceeds in two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a trade-off between high levels of modularity, connectivity, and robustness. Second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network by means of a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves on the simulated annealing algorithm used in Chapter 4. As is customary for this kind of algorithm, the simulated annealing used projected the dimensions of the multi-objective problem into a single dimension (in the form of a weighted average). While this technique gives very good results for a single run, it produces only one solution among the multitude of possible trade-offs between the different objectives. To better explore these trade-offs, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to the resilience of SESs. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, thereby opening the way to, for example, comparisons across case studies, or the monitoring of actor networks over time.
In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups in coordinating collaborations and knowledge exchanges between actors with potentially divergent interests.
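The scalarized (weighted-average) simulated annealing that the final chapter improves upon can be sketched on a toy graph problem in which connectivity and the within-module share of edges compete. The node count, module split, weights, and cooling schedule below are arbitrary choices for illustration, not those of the thesis:

```python
import math
import random

def anneal(weights, n_nodes=6, steps=20000, seed=1):
    """Scalarized simulated annealing over undirected graphs:
    maximize weights[0]*connectivity + weights[1]*within-module share.
    Nodes 0-2 and 3-5 form two hypothetical modules."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    within = [int((i < 3) == (j < 3)) for i, j in pairs]

    def score(state):
        m = sum(state)
        if m == 0:
            return 0.0
        conn = m / len(pairs)                       # edge density
        mod = sum(w for s, w in zip(state, within) if s) / m
        return weights[0] * conn + weights[1] * mod  # weighted-average scalarization

    state = [rng.randint(0, 1) for _ in pairs]
    best, best_score = state[:], score(state)
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)  # linear cooling
        cand = state[:]
        cand[rng.randrange(len(pairs))] ^= 1  # flip one edge
        delta = score(cand) - score(state)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            state = cand
        if score(state) > best_score:
            best, best_score = state[:], score(state)
    return best, best_score

_, s_conn = anneal((1.0, 0.0))  # connectivity-only weighting
_, s_mod = anneal((0.0, 1.0))   # modularity-share-only weighting
print(round(s_conn, 2), round(s_mod, 2))
```

As the chapter observes, each choice of `weights` yields a single compromise solution; a multi-objective variant instead maintains a whole front of non-dominated solutions.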
Abstract:
The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x,y) range from O(nlogn) time to O(n) time. However, it is often the case that a heuristic procedure is applied to reduce the original set of n points to a set of s < n points which contains the hull and so accelerates the final hull-finding procedure. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes, it is linear; second, no explicit sorting of data is required; and third, the reduced set of s points is constructed such that it forms an ordered set that can be directly pipelined into an O(n) time convex hull algorithm. Under these criteria a fast (or O(n)) preconditioner in principle creates a fast convex hull (approximately O(n)) for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration generated by the method against the most common convex hull algorithms. Experiments on a dataset show an additional speed-up of at least four times compared with existing preconditioning methods.
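The preconditioning idea can be illustrated with a classic quadrilateral (Akl-Toussaint-style) elimination feeding a standard O(n log n) monotone-chain hull. The paper's own preconditioner differs, notably in emitting an ordered set suitable for an O(n) hull pass; the code below is a generic sketch, not the authors' algorithm:

```python
import random

def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(points):
    """Andrew's monotone chain: O(n log n) convex hull in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def precondition(points):
    """Discard points strictly inside the quadrilateral spanned by the
    x- and y-extreme points (Akl-Toussaint heuristic); the hull of the
    survivors equals the hull of the full set."""
    xmin, xmax = min(points), max(points)
    ymin = min(points, key=lambda p: (p[1], p[0]))
    ymax = max(points, key=lambda p: (p[1], p[0]))
    quad = [xmin, ymin, xmax, ymax]  # CCW order (possibly degenerate)
    kept = []
    for p in points:
        # Strict test: boundary points are kept, so the hull is preserved
        if not all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4)):
            kept.append(p)
    return kept

rng = random.Random(42)
pts = [(rng.randint(-1000, 1000), rng.randint(-1000, 1000)) for _ in range(2000)]
kept = precondition(pts)
assert monotone_chain(kept) == monotone_chain(pts)
print(len(pts), "->", len(kept))
```

The elimination pass is a single linear scan with four cross products per point, which is why such preconditioners pay off before any sorting-based hull step.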