980 results for Point cloud processing


Relevance: 20.00%
Publisher:
Abstract:

To estimate the mid-point of an open-ended income category and to assess the impact of two equivalence scales on income-health associations. Data were obtained from the 2010 Brazilian Oral Health Survey (Pesquisa Nacional de Saúde Bucal – SBBrasil 2010). Income was converted from a categorical variable into two continuous variables (per capita and equivalized) for each mid-point. The median mid-point was R$ 14,523.50 and the mean was R$ 24,507.10. When per capita income was applied, 53% of the population was below the poverty line, compared with 15% with equivalized income. The magnitude of income-health associations was similar for continuous income, but categorized equivalized income tended to decrease the strength of the association.
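
To illustrate how an equivalence scale differs from a plain per capita division, the sketch below applies the modified OECD scale (1.0 for the first adult, 0.5 per additional adult, 0.3 per child) to a hypothetical household; the scale, the household composition and the income value are illustrative assumptions, not the data or the scales used in the study.

    # Illustrative sketch: per capita vs. equivalized household income.
    # The modified OECD scale and the example household are assumptions
    # for illustration; the study may use different equivalence scales.

    def per_capita(income: float, members: int) -> float:
        """Household income divided equally among all members."""
        return income / members

    def equivalized(income: float, adults: int, children: int) -> float:
        """Income divided by the modified OECD scale: 1.0 for the first
        adult, 0.5 per additional adult and 0.3 per child."""
        scale = 1.0 + 0.5 * max(adults - 1, 0) + 0.3 * children
        return income / scale

    if __name__ == "__main__":
        # Hypothetical household: R$ 1,500 per month, 2 adults, 2 children.
        income, adults, children = 1500.0, 2, 2
        print(per_capita(income, adults + children))   # 375.0
        print(equivalized(income, adults, children))   # about 714.3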

Relevance: 20.00%
Publisher:
Abstract:

"Cooperating objects" (COs) is a recently coined term that signifies the convergence of classical embedded computer systems, wireless sensor networks, and robotics and control. We present the essential elements of a reference architecture for scalable data processing under the CO paradigm.

Relevance: 20.00%
Publisher:
Abstract:

OBJECTIVE To propose a cut-off for the World Health Organization Quality of Life-Bref (WHOQOL-bref) as a predictor of quality of life in older adults. METHODS Cross-sectional study with 391 older adults registered in the Northwest Health District in Belo Horizonte, MG, Southeastern Brazil, between October 8, 2010 and May 23, 2011. The older adults' quality of life was measured using the WHOQOL-bref. The analysis was rationalized by outlining two extreme, simultaneous groups according to perceived quality of life and satisfaction with health (good/satisfactory quality of life – good or very good self-reported quality of life and being satisfied or very satisfied with health – G5; and poor/very poor quality of life – poor or very poor self-reported quality of life and feeling dissatisfied or very dissatisfied with health – G6). A receiver operating characteristic (ROC) curve was created to assess the diagnostic ability of different cut-off points of the WHOQOL-bref. RESULTS ROC curve analysis indicated a critical value of 60 as the optimal cut-off point for assessing perceived quality of life and satisfaction with health. The area under the curve was 0.758, with a sensitivity of 76.8% and a specificity of 63.8% for a cut-off of ≥ 60 for overall quality of life (G5), and a sensitivity of 95.0% and a specificity of 54.4% for a cut-off of < 60 for overall quality of life (G6). CONCLUSIONS Diagnostic interpretation of the ROC curve revealed that the cut-off of < 60 for overall quality of life had excellent sensitivity and negative predictive value for identifying older adults with probably worse quality of life who are dissatisfied with their health.
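
A minimal sketch of how sensitivity and specificity behave as the cut-off moves, which is what the ROC analysis above summarizes; the scores and group labels below are hypothetical, not the study's data.

    # Sensitivity/specificity of a score cut-off, as scanned by an ROC analysis.
    # The scores (0-100 scale) and binary group labels are hypothetical.
    import numpy as np

    scores = np.array([72, 65, 58, 81, 43, 55, 90, 62, 38, 70])
    labels = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])  # 1 = good QoL group

    def sensitivity_specificity(scores, labels, cutoff):
        """Classify 'score >= cutoff' as positive and compare with the labels."""
        predicted = (scores >= cutoff).astype(int)
        tp = np.sum((predicted == 1) & (labels == 1))
        tn = np.sum((predicted == 0) & (labels == 0))
        fp = np.sum((predicted == 1) & (labels == 0))
        fn = np.sum((predicted == 0) & (labels == 1))
        return tp / (tp + fn), tn / (tn + fp)

    for cutoff in range(40, 91, 10):
        sens, spec = sensitivity_specificity(scores, labels, cutoff)
        print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")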

Relevance: 20.00%
Publisher:
Abstract:

This paper presents a micro-power light energy harvesting system for indoor environments. Light energy is collected by amorphous silicon photovoltaic (a-Si:H PV) cells, processed by a switched-capacitor (SC) voltage doubler circuit with maximum power point tracking (MPPT), and finally stored in a large capacitor. The MPPT fractional open-circuit voltage (V_OC) technique is implemented by an asynchronous state machine (ASM) that creates and dynamically adjusts the clock frequency of the step-up SC circuit, matching the input impedance of the SC circuit to the maximum power point condition of the PV cells. The ASM has a separate local power supply to make it robust against load variations. In order to reduce the area occupied by the SC circuit while maintaining an acceptable efficiency, the SC circuit uses MOSFET capacitors with a charge-sharing scheme for the bottom-plate parasitic capacitors. The circuit occupies an area of 0.31 mm² in a 130 nm CMOS technology. The system was designed to work under realistic indoor light intensities. Experimental results show that the proposed system, using PV cells with an area of 14 cm², is capable of starting up from a 0 V condition with an irradiance of only 0.32 W/m². After starting up, the system requires an irradiance of only 0.18 W/m² (18 µW/cm²) to remain operating. The ASM circuit can operate correctly from a local power supply voltage of 453 mV, dissipating only 0.085 µW. These values are, to the best of the authors' knowledge, the lowest reported in the literature. The maximum efficiency of the SC converter is 70.3% for an input power of 48 µW, which is comparable with reported values from circuits operating at similar power levels.
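
The fractional open-circuit voltage technique regulates the PV operating point towards k·V_OC, with k typically around 0.7-0.8 for silicon cells. The sketch below is a behaviour-level software analogue of that loop, nudging a clock frequency so that a toy converter model settles near the target voltage; the constants and the converter model are illustrative assumptions, not the ASM or SC circuit described in the paper.

    # Behaviour-level sketch of fractional open-circuit-voltage MPPT: the PV
    # operating voltage is steered towards K_FRACTION * V_OC by adjusting the
    # switching (clock) frequency of a step-up converter. All constants and
    # the first-order converter model are illustrative assumptions.

    K_FRACTION = 0.75      # assumed fraction of V_OC at the maximum power point
    V_OC = 0.9             # sampled open-circuit voltage of the PV cells [V]
    GAIN = 2.0e5           # assumed frequency adjustment gain [Hz/V]

    def pv_voltage(clock_hz: float) -> float:
        """Toy model: a higher clock frequency lowers the converter input
        impedance, which pulls the PV operating voltage down."""
        return 1.0 / (1.0 + clock_hz / 1.0e6)

    clock_hz = 1.0e5
    v_ref = K_FRACTION * V_OC
    for _ in range(200):
        error = pv_voltage(clock_hz) - v_ref
        clock_hz = max(1.0e3, clock_hz + GAIN * error)   # integrate the error

    print(f"target {v_ref:.3f} V, reached {pv_voltage(clock_hz):.3f} V")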

Relevance: 20.00%
Publisher:
Abstract:

Workflows have been successfully applied to express the decomposition of complex scientific applications, which has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the specification of workflow tasks, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration in which activities are executed for millions of iterations over long periods of time and dynamic workflow reconfiguration is supported after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, in which the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
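
As a conceptual sketch of the data-driven coordination idea (hypothetical Python stand-ins, not AWARD's actual Java interfaces), each activity below repeatedly takes its input from a shared tuple space, runs a user-supplied task and puts the result back for its successor.

    # Conceptual sketch of workflow activities coordinated through a shared
    # tuple space. All names and interfaces are illustrative, not AWARD's API.
    import queue
    import threading
    from collections import defaultdict

    class TupleSpace:
        """Tiny tuple space: entries are (tag, value); take() blocks on a tag."""
        def __init__(self):
            self._queues = defaultdict(queue.Queue)

        def put(self, tag, value):
            self._queues[tag].put(value)

        def take(self, tag):
            return self._queues[tag].get()

    class Activity(threading.Thread):
        """Autonomic activity: take input, run the task, emit the output."""
        def __init__(self, space, in_tag, out_tag, task, iterations):
            super().__init__()
            self.space, self.in_tag, self.out_tag = space, in_tag, out_tag
            self.task, self.iterations = task, iterations

        def run(self):
            for _ in range(self.iterations):
                data = self.space.take(self.in_tag)
                self.space.put(self.out_tag, self.task(data))

    if __name__ == "__main__":
        space = TupleSpace()
        a = Activity(space, "source", "stage1", lambda x: x * 2, iterations=3)
        b = Activity(space, "stage1", "sink", lambda x: x + 1, iterations=3)
        a.start(); b.start()
        for i in range(3):
            space.put("source", i)
        print([space.take("sink") for _ in range(3)])   # [1, 3, 5]
        a.join(); b.join()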

Relevance: 20.00%
Publisher:
Abstract:

This paper presents a single-precision floating-point arithmetic unit with support for multiplication, addition, fused multiply-add, reciprocal, square root and inverse square root, with high performance and low resource usage. The design uses a piecewise 2nd-order polynomial approximation to implement the reciprocal, square root and inverse square root. The unit can be configured with any number of these operations and is capable of calculating any of them with a throughput of one operation per cycle. The floating-point multiplier of the unit is also used to implement the polynomial approximation and the fused multiply-add operation. We have compared our implementation with other state-of-the-art proposals, including the Xilinx Core-Gen operators, and conclude that the approach has a high relative performance/area efficiency. © 2014 Technical University of Munich (TUM).
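
To give a feel for the piecewise second-order polynomial approach, the sketch below fits a quadratic to 1/x on each of a few sub-intervals of the mantissa range [1, 2) and evaluates it with two multiply-adds, the pattern a floating-point multiplier plus adder can reuse; the segment count and fitting method are illustrative choices, not the parameters of the unit described above.

    # Sketch of a piecewise 2nd-order polynomial approximation of 1/x on the
    # mantissa range [1, 2). The segment count is an illustrative choice.
    import numpy as np

    SEGMENTS = 8
    edges = np.linspace(1.0, 2.0, SEGMENTS + 1)

    # Least-squares fit of c2*x^2 + c1*x + c0 to 1/x on each segment.
    coeffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.linspace(lo, hi, 64)
        coeffs.append(np.polyfit(xs, 1.0 / xs, 2))

    def reciprocal_approx(x: float) -> float:
        """Evaluate the per-segment quadratic with two multiply-adds (Horner)."""
        seg = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)
        c2, c1, c0 = coeffs[seg]
        return (c2 * x + c1) * x + c0

    xs = np.linspace(1.0, 2.0, 10001, endpoint=False)
    errors = np.abs([reciprocal_approx(x) - 1.0 / x for x in xs])
    print(f"max abs error over [1, 2): {errors.max():.2e}")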

Relevance: 20.00%
Publisher:
Abstract:

In global scientific experiments with collaborative scenarios involving multinational teams, there are big challenges related to data access; in particular, data movements to other regions or Clouds are precluded by constraints on latency costs, data privacy and data ownership. Furthermore, each site processes local data sets using specialized algorithms and produces intermediate results that are useful as inputs to applications running on remote sites. This paper shows how to model such collaborative scenarios as a scientific workflow implemented with AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic), a decentralized framework offering a feasible solution to run workflow activities on distributed data centers in different regions without the need for large data movements. The AWARD workflow activities are independently monitored and can be dynamically reconfigured and steered by different users, namely by hot-swapping the algorithms to enhance the computation results or by changing the workflow structure to support feedback dependencies, where an activity receives feedback output from a successor activity. A real implementation of one practical scenario and its execution on multiple data centers of the Amazon Cloud is presented, including experimental results with steering by multiple users.
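
Continuing the conceptual sketch style of the previous entry (hypothetical interfaces, not AWARD's actual API), hot-swapping an activity's algorithm can be modelled as checking a control channel before each iteration and replacing the task function when a new one has been published:

    # Conceptual sketch of hot-swapping the algorithm of a workflow activity:
    # before each iteration the activity checks a control channel and, if a
    # replacement task has been published, adopts it. Illustrative stand-in only.
    import queue

    control, inputs, outputs = queue.Queue(), queue.Queue(), queue.Queue()

    def activity(task, iterations):
        """Process inputs, checking for a replacement task before each one."""
        for _ in range(iterations):
            try:
                task = control.get_nowait()   # hot-swap the algorithm if requested
            except queue.Empty:
                pass
            outputs.put(task(inputs.get()))

    for i in range(4):
        inputs.put(i)

    activity(lambda x: x + 1, iterations=2)   # runs with the original algorithm
    control.put(lambda x: x * 10)             # reconfiguration request from a user
    activity(lambda x: x + 1, iterations=2)   # picks up the swapped algorithm
    print([outputs.get() for _ in range(4)])  # [1, 2, 20, 30]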

Relevance: 20.00%
Publisher:
Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Environmental Engineering, profile in Environmental Management and Systems.

Relevance: 20.00%
Publisher:
Abstract:

Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms on the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).

Relevance: 20.00%
Publisher:
Abstract:

Towpregs based on different fibres and thermoplastic matrices were processed for highly demanding and more commercial applications by different composite processing technologies. In the technologies used, compression moulding and pultrusion, the final composite processing parameters were studied in order to obtain composites with adequate properties at industrially compatible production rates. The produced towpregs were tested to verify their polymer content and degree of impregnation. The results obtained have shown that the coating line was able to produce, efficiently and at industrial-scale speeds, thermoplastic matrix towpregs that may be used to manufacture composites for advanced and larger-volume commercial markets.

Relevance: 20.00%
Publisher:
Abstract:

This paper is on offshore wind energy conversion systems installed in deep water and equipped with a back-to-back neutral-point-clamped full-power converter and a permanent magnet synchronous generator with an AC link. The model for the drive train is a five-mass model that incorporates the dynamics of the structure and the tower in order to emulate the effect of the moving surface. A three-level converter and a four-level converter are the two options considered to equip the conversion system, both with a fractional-order control strategy. Simulation studies are carried out to assess the quality of the energy injected into the electric grid. Finally, conclusions are presented. © 2014 Elsevier Ltd. All rights reserved.
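
Fractional-order control relies on derivatives and integrals of non-integer order; a common discrete approximation is the Grünwald-Letnikov sum. The sketch below computes a truncated Grünwald-Letnikov fractional derivative of a sampled signal and checks it against a known closed form; it is a generic building block, not the controller design used in the paper.

    # Truncated Grünwald-Letnikov approximation of a fractional derivative of
    # order alpha for a uniformly sampled signal. Generic illustration only.
    import numpy as np

    def gl_fractional_derivative(y, alpha, dt):
        """D^alpha y at each sample, using all past samples (truncated GL sum)."""
        n = len(y)
        # Coefficients (-1)^k * binom(alpha, k) via the standard recursion.
        c = np.ones(n)
        for k in range(1, n):
            c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
        out = np.zeros(n)
        for i in range(n):
            out[i] = np.dot(c[: i + 1], y[i::-1]) / dt**alpha
        return out

    t = np.linspace(0.0, 1.0, 501)
    approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])  # y(t) = t
    exact = 2.0 * np.sqrt(t / np.pi)                         # known half-derivative of t
    print(f"max abs error: {np.max(np.abs(approx - exact)):.3e}")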

Relevance: 20.00%
Publisher:
Abstract:

In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. These kinds of problems are much more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization and has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible, and iii) achieving a Pareto front that is uniformly spread. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have been developing new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace, and iii) up to five criteria used to qualify the evolving trajectory, namely the joint travelled distance, joint velocity, end-effector/Cartesian distance, end-effector/Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector travelled distances, trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the paper addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the convergence of the algorithm. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present, respectively, the results for the 3R redundant manipulator with five objectives and other complementary experiments. Finally, Section 8 draws the main conclusions.
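
For the multi-objective part, the core operation is extracting the set of non-dominated solutions (the Pareto front) from a population, as sketched below for the minimization case; the two-objective data are hypothetical and the snippet illustrates the concept rather than the specific MOGA/NSGA variant used in this chapter.

    # Minimal Pareto-front extraction for minimization: keep a solution if no
    # other is at least as good in every objective and strictly better in one.
    import numpy as np

    def dominates(a, b):
        """True if objective vector a dominates b (minimization)."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(objectives):
        """Indices of the non-dominated rows of an (n, m) objective matrix."""
        n = len(objectives)
        return [i for i in range(n)
                if not any(dominates(objectives[j], objectives[i])
                           for j in range(n) if j != i)]

    # Hypothetical population scored on two criteria, e.g. joint travelled
    # distance and energy (both to be minimized).
    population = np.array([[3.0, 9.0],
                           [4.0, 4.0],
                           [6.0, 2.0],
                           [5.0, 5.0],    # dominated by [4.0, 4.0]
                           [9.0, 1.0]])
    print(pareto_front(population))   # [0, 1, 2, 4]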

Relevance: 20.00%
Publisher:
Abstract:

In this work, a comparative study of different drill point geometries and feed rates for the drilling of composite laminates is presented. For this purpose, thrust force monitoring during drilling, hole wall roughness measurement and delamination extension assessment after drilling were carried out. Delamination is evaluated using enhanced radiography combined with a dedicated computational platform that integrates image processing and analysis algorithms. An experimental procedure was planned and the outcomes were evaluated. Results show that a careful combination of the factors involved, such as drill tip geometry and feed rate, can reduce delamination damage.
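
As an illustration of the kind of image-processing step involved, the sketch below binarizes a radiograph-like array and computes the conventional delamination factor F_d = D_max / D_0 (maximum delaminated extent over nominal hole diameter); the synthetic image, the threshold and the use of this particular metric are illustrative assumptions, not the dedicated platform described above.

    # Illustrative sketch: threshold a radiograph-like image and estimate the
    # delamination factor Fd = Dmax / D0. Synthetic data and threshold only.
    import numpy as np

    def delamination_factor(image, threshold, hole_diameter_px):
        """Binarize the image and take the largest extent of the damaged region."""
        ys, xs = np.nonzero(image > threshold)
        if len(xs) == 0:
            return 1.0
        d_max = max(xs.max() - xs.min(), ys.max() - ys.min()) + 1
        return d_max / hole_diameter_px

    # Synthetic 100x100 "radiograph": a bright damaged disc of radius 20 px
    # centred on a hole with a nominal diameter of 30 px.
    yy, xx = np.mgrid[0:100, 0:100]
    image = np.where(np.hypot(xx - 50, yy - 50) < 20, 1.0, 0.1)

    fd = delamination_factor(image, threshold=0.5, hole_diameter_px=30)
    print(f"Fd = {fd:.2f}")   # about 1.3 for this synthetic case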

Relevance: 20.00%
Publisher:
Abstract:

Purpose - The purpose of this paper is to discuss the linear solution of equality-constrained problems using the frontal solution method without explicit assembly. Design/methodology/approach - A re-written frontal solution method with a priori pivot and front sequences; OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads; constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints; a priori pivot and front sequences; no need for symbolic assembly; constraints treated at the core of the frontal solver; use of OpenMP in the main frontal loop, now quantified; availability of the software.
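
To make the matrix transformation idea concrete, the sketch below eliminates a single equality constraint c·u = g from a small dense system K u = f by the master-slave transformation, solves the reduced system and recovers the constrained unknown; the matrices are illustrative and this dense stand-in only mirrors what the frontal, unassembled solver does during elimination.

    # Sketch of the matrix transformation (master-slave) treatment of one
    # equality constraint c.u = g applied to K u = f. Dense, illustrative only.
    import numpy as np

    def solve_constrained(K, f, c, g, slave):
        """Eliminate dof 'slave' using c.u = g, solve the reduced system, recover u."""
        n = K.shape[0]
        masters = [j for j in range(n) if j != slave]
        # Transformation u = T u_m + u_p, with the slave dof expressed from c.u = g.
        T = np.zeros((n, n - 1))
        u_p = np.zeros(n)
        for col, j in enumerate(masters):
            T[j, col] = 1.0
            T[slave, col] = -c[j] / c[slave]
        u_p[slave] = g / c[slave]
        u_m = np.linalg.solve(T.T @ K @ T, T.T @ (f - K @ u_p))
        return T @ u_m + u_p

    K = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    f = np.array([1.0, 2.0, 3.0])
    c = np.array([1.0, -1.0, 0.0])          # enforce u0 - u1 = 0
    u = solve_constrained(K, f, c, g=0.0, slave=1)
    print(u, "constraint residual:", c @ u)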

Relevance: 20.00%
Publisher:
Abstract:

Though the formal mathematical idea of introducing non-integer-order derivatives can be traced back to the 17th century, to a 1695 letter in which L'Hospital asked Leibniz what the meaning of D^n y would be if n = 1/2 [1], it was properly outlined only in the 19th century [2, 3, 4]. Due to the lack of a clear physical interpretation, its first applications in physics appeared only later, in the 20th century, in connection with visco-elastic phenomena [5, 6]. The topic subsequently received quite general attention [7, 8, 9] and found new applications in materials science [10], the analysis of earthquake signals [11], the control of robots [12], and the description of diffusion [13], among others.
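
As a concrete answer to the question in L'Hospital's letter, the Riemann-Liouville power rule D^a x^k = Γ(k+1)/Γ(k+1-a) x^(k-a) gives, for the simple function y(x) = x,

    D^{1/2} x \;=\; \frac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2}
              \;=\; \frac{2\sqrt{x}}{\sqrt{\pi}}
              \;=\; 2\sqrt{\frac{x}{\pi}},

and applying the half-derivative twice to x reproduces the ordinary first derivative, D^{1/2} D^{1/2} x = 1, which is the consistency one expects of a fractional order.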