8 results for machine tools and accessories
at Duke University
Abstract:
While bonobos and chimpanzees are both genetically and behaviorally very similar, they also differ in significant ways. Bonobos are more cautious and socially tolerant, while chimpanzees are more dependent on extractive foraging, which requires tools. The similarities suggest the two species should be cognitively similar, while the behavioral differences predict where they should differ cognitively. We compared both species on a wide range of cognitive problems testing their understanding of the physical and social world. Bonobos were more skilled at solving tasks related to theory of mind or an understanding of social causality, while chimpanzees were more skilled at tasks requiring the use of tools and an understanding of physical causality. These species differences support the role of ecological and socio-ecological pressures in shaping cognitive skills over relatively short periods of evolutionary time.
Abstract:
It is increasingly evident that evolutionary processes play a role in how ecological communities are assembled. However, the extent to which evolution influences how plants respond to spatial and environmental gradients and interact with each other is less clear. In this dissertation, I leverage evolutionary tools and thinking to understand how space and environment affect community composition and patterns of gene flow in a unique system of Atlantic rainforest and restinga (sandy coastal plain) habitats in southeastern Brazil.
In chapter one, I investigate how space and environment affect the population genetic structure and gene flow of Aechmea nudicaulis, a bromeliad species that co-occurs in forest and restinga habitats. I genotyped seven microsatellite loci and sequenced one chloroplast DNA region for individuals collected in seven pairs of forest/restinga sites. Bayesian genetic clustering analyses show that populations of A. nudicaulis are geographically structured into northern and southern groups, a pattern consistent with broader-scale phylogeographic dynamics of the Atlantic rainforest. On the other hand, explicit coalescent-based migration models estimate that inter-habitat gene flow is less common than gene flow between populations in the same habitat type, despite their geographic discontinuity. I conclude that there is evidence for repeated colonization of the restingas from forest populations, even though the steep environmental gradient between habitats is a stronger barrier to gene flow than geographic distance.
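For intuition on how such gene-flow comparisons are quantified (the chapter itself fits explicit coalescent migration models; the classic summary statistic below is shown only for reference, with standard notation not taken from the dissertation), Wright's fixation index compares subpopulation to total heterozygosity, and under the island model it relates to the effective number of migrants per generation:

    F_{ST} = \frac{H_T - H_S}{H_T}, \qquad F_{ST} \approx \frac{1}{1 + 4 N_e m}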
In chapter two, I use data on 2,800 individual plants finely mapped in a restinga plot and on first-year survival of 500 seedlings to understand the roles of phylogeny, functional traits, and abiotic conditions in the spatial structuring of that community. I demonstrate that phylogeny is a poor predictor of functional traits and that convergence in these traits is pervasive. In general, the community is not phylogenetically structured, with at most 14% of the plots deviating significantly from the null model. The functional traits specific leaf area (SLA), leaf dry matter content (LDMC), and maximum height also showed no clear pattern of spatial structuring. Leaf area, on the other hand, is strongly overdispersed across all spatial scales. Although leaf area overdispersion would generally be taken as evidence of competition, I argue that this interpretation is probably misleading. Finally, I show that seedling survival increases dramatically when seedlings grow shaded by an adult individual, suggesting facilitation; phylogenetic distance to the adult neighbor, however, has no influence on survival rates. Taken together, these results indicate that phylogeny has very limited influence on the fine-scale assembly of restinga communities.
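Phylogenetic-structure tests of this kind are commonly computed as a standardized effect size of mean pairwise phylogenetic distance (MPD) against a tip-shuffling null model. A minimal sketch in Python, assuming a precomputed pairwise distance matrix and a presence vector for one plot (a generic example, not the dissertation's code):

    import numpy as np

    def mpd(dist, present):
        """Mean pairwise phylogenetic distance among species present in a plot."""
        idx = np.flatnonzero(present)
        if len(idx) < 2:
            return np.nan
        sub = dist[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)
        return sub[iu].mean()

    def ses_mpd(dist, present, n_null=999, seed=0):
        """Standardized effect size of MPD against a tip-shuffling null model."""
        rng = np.random.default_rng(seed)
        obs = mpd(dist, present)
        n = dist.shape[0]
        null = np.empty(n_null)
        for i in range(n_null):
            perm = rng.permutation(n)  # shuffle tip labels on the tree
            null[i] = mpd(dist[np.ix_(perm, perm)], present)
        return (obs - null.mean()) / null.std()

Negative standardized values indicate phylogenetic clustering and positive values indicate overdispersion; roughly 5% of plots are expected to deviate significantly by chance alone.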
Abstract:
Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online use. The bottleneck, however, is the lack of real-time algorithms for recognizing geometric primitives (e.g., planes and natural quadrics) in a scanned point cloud. One of the most important and most frequently occurring geometric primitives in engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To segment planar regions effectively, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmenting the planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point-cloud structuring, a z-buffer algorithm with mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
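The "standard fitting procedures" for the plane-fitting step typically amount to a total least-squares fit, which has a closed form via the singular value decomposition. A minimal sketch, not the authors' implementation:

    import numpy as np

    def fit_plane(points):
        """Total least-squares plane fit for an (N, 3) array of points.
        Returns (unit normal n, offset d) such that n . x + d ~= 0
        for points x in the planar segment."""
        centroid = points.mean(axis=0)
        # The right singular vector for the smallest singular value of the
        # centered points is the direction of least variance: the plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        d = -normal @ centroid
        return normal, d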
Abstract:
BACKGROUND/AIMS: The obesity epidemic has spread to young adults, and obesity is a significant risk factor for cardiovascular disease. The prominence and increasing functionality of mobile phones may provide an opportunity to deliver longitudinal and scalable weight-management interventions to young adults. The aim of this article is to describe the design and development of the intervention tested in the Cell Phone Intervention for You study and to highlight the importance of the adaptive intervention design that made it possible. The Cell Phone Intervention for You study was a National Heart, Lung, and Blood Institute-sponsored, controlled, 24-month randomized clinical trial comparing two active interventions to a usual-care control group. Participants were 365 overweight or obese (body mass index ≥ 25 kg/m²) young adults. METHODS: Both active interventions were designed based on social cognitive theory and incorporated techniques for behavioral self-management and motivational enhancement. Initial intervention development occurred during a 1-year formative phase utilizing focus groups and iterative, participatory design. During intervention testing, an adaptive intervention design was employed, in which an intervention is updated or extended throughout a trial while ensuring delivery of exactly the same intervention to each cohort. This strategy distributed the technical work and allowed the introduction of novel components in phases intended to promote and sustain participant engagement. Adaptive intervention design was made possible by exploiting the mobile phone's remote data capabilities, so that adoption of particular application components could be continuously monitored and components subsequently added or updated remotely. RESULTS: The cell phone intervention was delivered almost entirely via cell phone and was always-present, proactive, and interactive, providing passive and active reminders, frequent opportunities for knowledge dissemination, and multiple tools for self-tracking and receiving tailored feedback. The intervention changed over the 2 years to promote and sustain engagement. The personal coaching intervention, by contrast, consisted primarily of personal coaching by trained coaches based on a proven intervention, enhanced with a mobile application in which all interactions with the technology were participant-initiated. CONCLUSION: The complexity and length of this technology-based randomized clinical trial created challenges in engagement and technology adaptation, which were generally discovered using novel remote-monitoring technology and addressed using the adaptive intervention design. Investigators should plan to develop tools and procedures that explicitly support continuous remote monitoring of interventions, enabling adaptive intervention design in long-term, technology-based studies, in addition to developing the interventions themselves.
Abstract:
With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding RNA structure-function relationships is currently of high importance. To draw clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially in the backbone), building accurate experimental models of RNA structures is a significant challenge. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures but cannot circumvent the need for thoughtful examination of local details, so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.
Abstract:
This work explores the use of statistical methods for describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has increased the need for algorithms that infer, understand, and utilize information about the position and orientation of sensor platforms observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
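For reference, the standard Bingham density on the unit hypersphere S^{d-1}, which the mirrored normal-Bingham distribution generalizes (notation assumed here, not taken from the thesis), is defined by an orthogonal matrix M, a concentration matrix Z, and a normalizing constant F(Z):

    p(x; M, Z) = \frac{1}{F(Z)} \exp\!\left( x^\top M Z M^\top x \right), \qquad x \in S^{d-1}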
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
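Schematically, such a prior-augmented bundle adjustment minimizes reprojection error plus a structure prior over the 3D points (symbols assumed for illustration: \pi projects point X_j into the camera with parameters \theta_i, x_{ij} is the corresponding detection, and p(X_j) encodes known object structure, e.g., a pedestrian's head lying directly above the feet):

    \min_{\{\theta_i\},\, \{X_j\}} \; \sum_{i,j} \left\| \pi(\theta_i, X_j) - x_{ij} \right\|^2 \;-\; \lambda \sum_j \log p(X_j)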
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
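Whether applied to pixels or to feature-cell coordinates, the core operation of inverse perspective mapping is a planar homography. A minimal sketch of the point-warping step (a generic example, not the thesis's implementation):

    import numpy as np

    def warp_points(H, pts):
        """Apply a 3x3 homography H (e.g., an inverse perspective mapping)
        to an (N, 2) array of 2D points, returning the mapped (N, 2) points."""
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        mapped = homog @ H.T
        return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean

Warping the comparatively few feature-cell centers this way, rather than resampling every pixel, is what makes a feature-space approach so much cheaper than image warping.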
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
We examined facilitators of and barriers to the adoption of genomic services for colorectal care, one of the first genomic medicine applications, within the Veterans Health Administration, to shed light on areas for practice change. We conducted semi-structured interviews with 58 clinicians to understand use of the following genomic services for colorectal care: family health history documentation, molecular and genetic testing, and genetic counseling. Data collection and analysis were informed by two conceptual frameworks, the Greenhalgh Diffusion of Innovation and the Andersen Behavioral Model, to allow concurrent examination of both access and innovation factors. Specialists were more likely than primary care clinicians to obtain family history to investigate hereditary colorectal cancer (CRC), but with limited detail; clinicians suggested templates to facilitate retrieval and documentation of family history according to guidelines. Clinicians identified the advantage of molecular tumor analysis prior to genetic testing, but tumor testing was infrequently used due to perceived low disease burden. Support from genetic counselors was regarded as facilitating consideration of a hereditary basis for a CRC diagnosis, but awareness of and access to this expertise varied. Our data suggest the need for tools and policies that establish and disseminate well-defined processes for accessing services and adhering to guidelines.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement a stochastic process. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which maps directly onto the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. RET networks can be used in different ways, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on random samples, such as (1) fluorescent taggants and (2) stochastic computing.
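For reference, a phase-type distribution has a standard form for an absorbing CTMC with initial distribution \alpha over the transient states and subgenerator matrix S (notation assumed, not taken from the dissertation): the time to absorption, here the photon emission time, has density

    f(t) = \boldsymbol{\alpha}\, e^{S t}\, \mathbf{s}, \qquad \mathbf{s} = -S \mathbf{1}.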
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the number of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE)-based taggant identification guarantees high accuracy even with only a few hundred detected photons.
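A schematic of such MLE-based identification under the phase-type model above, assuming each candidate taggant is described by parameters (alpha, S) (hypothetical names, not the dissertation's code):

    import numpy as np
    from scipy.linalg import expm

    def log_likelihood(times, alpha, S):
        """Log-likelihood of photon arrival times under the phase-type
        density f(t) = alpha @ expm(S t) @ s, where s = -S @ 1."""
        s = -S @ np.ones(S.shape[0])
        return sum(np.log(alpha @ expm(S * t) @ s) for t in times)

    def identify(times, taggants):
        """Return the taggant whose model maximizes the likelihood.
        `taggants` maps a taggant ID to its (alpha, S) parameters."""
        return max(taggants, key=lambda tid: log_likelihood(times, *taggants[tid]))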
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware of traditional computers, especially for high-dimensional and complex problems; a sketch of such a sampling-dominated loop follows below. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as a specialized functional unit or organized as a discrete accelerator to bring substantial speedups and power savings.
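A minimal illustration of the kind of sampling-dominated inner loop an RSU could accelerate: a Gibbs sampler for a bivariate normal with correlation rho, where every iteration draws from a freshly parameterized distribution (a generic example, not from the dissertation):

    import numpy as np

    def gibbs_bivariate_normal(rho, n_steps=10_000, seed=0):
        """Gibbs sampler for a standard bivariate normal with correlation rho.
        Each conditional is N(rho * other, 1 - rho**2)."""
        rng = np.random.default_rng(seed)
        x = y = 0.0
        samples = np.empty((n_steps, 2))
        for i in range(n_steps):
            x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
            y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
            samples[i] = (x, y)
        return samples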