857 results for Facial Object Based Method


Relevance:

100.00%

Publisher:

Abstract:

This article aims to show that it is possible to encourage informal learning in museums through the construction of virtual communities based on repositories of learning objects, communication tools, and the production of learning objects (LOs) by the visitors themselves. The focus is on encouraging learning by motivating the visitor's participation and involvement in the activities of the virtual community. From this perspective, we start from the assumption that information, communication, interaction, and cooperation are essential to the learning process in the informal context of museums. We believe that interaction and cooperation are integral parts of the learning process provided by virtual communities, and that the main learning resource offered in these communities is the learning object. Accordingly, we built the Muzar Virtual Community and ran an experiment with the environment in order to assess the extent to which visitors are encouraged to produce new knowledge.

Relevance:

100.00%

Publisher:

Abstract:

AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING (VIIP), 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008.

Relevance:

100.00%

Publisher:

Abstract:

The present document deals with the optimization of the shape of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalising the lift coefficient. A set of control points defining the geometry is passed and parameterized as a B-Spline curve. These points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process involves the use of 2D and 3D meshing algorithms that were coupled into our own code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Some standard NACA profiles were used to test the algorithm, their control points being obtained first. Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. In order to obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added. After a simulation is run, drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented in order to define the magnitude and direction of the displacement of each control point. The control points and the other parameters defined as design variables are iteratively modified in order to reach an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that is consistent with aerodynamic principles.
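
As an illustration of the optimization loop described above, the sketch below applies a finite-difference steepest-descent update to B-Spline control points while penalizing lift loss; the `run_cfd` function is a hypothetical placeholder standing in for the meshing and CFD solve, and the step sizes, penalty, and geometry are assumptions for illustration only.

```python
# Sketch only: finite-difference steepest descent on B-Spline control points.
# `run_cfd` is a hypothetical placeholder for the real mesh-and-solve pipeline.
import numpy as np

def run_cfd(points):
    # Placeholder surrogate returning (drag, lift) so the sketch runs end to end.
    drag = float(np.sum(points[:, 1] ** 2))
    lift = 1.0 - 0.1 * float(np.sum(np.abs(points[:, 1])))
    return drag, lift

def objective(points, lift_min=0.8, penalty=100.0):
    # Minimize drag; penalize any drop of lift below the required minimum.
    drag, lift = run_cfd(points)
    return drag + penalty * max(0.0, lift_min - lift)

def steepest_descent(points, step=0.01, h=1e-4, iters=50):
    points = points.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(points)
        base = objective(points)
        # Finite-difference gradient: perturb each control-point coordinate.
        for idx in np.ndindex(*points.shape):
            pert = points.copy()
            pert[idx] += h
            grad[idx] = (objective(pert) - base) / h
        points -= step * grad  # move against the gradient (steepest descent)
    return points

if __name__ == "__main__":
    ctrl = np.array([[0.0, 0.00], [0.3, 0.12], [0.7, 0.08], [1.0, 0.00]])
    print(steepest_descent(ctrl))
```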

Relevance:

100.00%

Publisher:

Abstract:

Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in the cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into the occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants on the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites where PCR did detect it. This confirms our assumption that the use of complementary methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in the cork industry.

Relevance:

100.00%

Publisher:

Abstract:

As a way to gain greater insight into the operation of online communities, this dissertation applies automated text mining techniques to text-based communication in order to identify, describe and evaluate the underlying social networks among online community members. The main thrust of the study is to automate the discovery of social ties that form between community members, using only the digital footprints left behind in their online forum postings. Currently, one of the most common but time-consuming methods for discovering social ties between people is to ask them questions about their perceived social ties. However, such surveys are difficult to conduct because of the high investment in time associated with data collection and the sensitive nature of the questions that may be asked. To overcome these limitations, the dissertation presents a new, content-based method for the automated discovery of social networks from threaded discussions, referred to as the 'name network'. As a case study, the proposed automated method is evaluated in the context of online learning communities. The results suggest that the proposed 'name network' method for collecting social network data is a viable alternative to the costly and time-consuming collection of users' data through surveys. The study also demonstrates how social networks produced by the 'name network' method can be used to study online classes and to look for evidence of collaborative learning in online learning communities. For example, educators can use name networks as a real-time diagnostic tool to identify students who might need additional help or students who may provide such help to others. Future research will evaluate the usefulness of the 'name network' method in other types of online communities.
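
A minimal sketch of the general idea behind a 'name network', under the assumption that a directed tie is inferred whenever a poster mentions another member's name; the posts, member list, and matching rule below are illustrative and not the dissertation's actual data model.

```python
# Sketch only: infer directed ties from name mentions in threaded discussions.
import re
import networkx as nx

posts = [
    {"author": "alice", "text": "I agree with Bob's point about the deadline."},
    {"author": "bob",   "text": "Thanks Alice. Carol, could you share your notes?"},
    {"author": "carol", "text": "Sure, Bob, I'll post them tonight."},
]
members = {"alice", "bob", "carol"}

graph = nx.DiGraph()
for post in posts:
    # Tokenize the post and look for other members' names (case-insensitive).
    mentioned = {w.lower() for w in re.findall(r"[A-Za-z]+", post["text"])} & members
    for name in mentioned - {post["author"]}:
        graph.add_edge(post["author"], name)  # tie: author addressed/referred to member

print(sorted(graph.edges()))
```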

Relevance:

100.00%

Publisher:

Abstract:

AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA (CBA), 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.

Relevance:

100.00%

Publisher:

Abstract:

Nonlinear thermo-mechanical properties of advanced polymers are crucial to accurate prediction of the process-induced warpage and residual stress of electronics packages. A Fiber Bragg grating (FBG) sensor-based method is advanced and implemented to determine temperature- and time-dependent nonlinear properties. The FBG sensor is embedded in the center of a cylindrical specimen and deforms together with the specimen; the strains of the specimen under different loading conditions are monitored by the FBG sensor. Two main sources of warpage are considered: curing-induced warpage and warpage induced by coefficient of thermal expansion (CTE) mismatch. The effective chemical shrinkage and the equilibrium modulus are needed for the prediction of curing-induced warpage. Considering the various polymeric materials used in microelectronic packages, unique curing setups and procedures are developed for elastomers (extremely low modulus, medium viscosity, room-temperature curing), underfill materials (medium modulus, low viscosity, high-temperature curing), and epoxy molding compound (EMC: high modulus, high viscosity, high-temperature pressure curing), most notably (1) a zero-constraint mold for elastomers, (2) a two-stage curing procedure for underfill materials, and (3) a novel air-cylinder-based setup for EMC. For the CTE-mismatch-induced warpage, the temperature-dependent CTE and the comprehensive viscoelastic properties are measured. The cured cylindrical specimen with an FBG sensor embedded in the center is further used for viscoelastic property measurements. A uniaxial compressive loading is applied to the specimen to measure the time-dependent Young's modulus, and the test is repeated from room temperature to the reflow temperature to capture its time-temperature dependence. A separate high-pressure system is developed for the bulk modulus measurement; the time- and temperature-dependent bulk modulus is measured at the same temperatures as the Young's modulus. Master curves of the Young's modulus and bulk modulus of the EMC are created, and a single set of shift factors is determined from time-temperature superposition. Supplementary experiments are conducted to verify the validity of the assumptions associated with linear viscoelasticity. The measured time-temperature dependent properties are further verified by shadow moiré and Twyman/Green tests.
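
For background, strain and temperature are typically recovered from an FBG reading through the standard first-order response relation below (general FBG theory, given here as context rather than a result of this thesis): lambda_B is the Bragg wavelength, n_eff the effective refractive index, Lambda the grating pitch, p_e the photo-elastic coefficient, alpha_Lambda the thermal expansion coefficient and xi the thermo-optic coefficient.

```latex
% Standard first-order FBG response (general background, not thesis-specific).
\[
  \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda ,
  \qquad
  \frac{\Delta \lambda_B}{\lambda_B}
    = (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \xi)\,\Delta T
\]
```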

Relevance:

100.00%

Publisher:

Abstract:

LOPES-DOS-SANTOS, V.; CONDE-OCAZIONEZ, S.; NICOLELIS, M. A. L.; RIBEIRO, S. T.; TORT, A. B. L. Neuronal assembly detection and cell membership specification by principal component analysis. PLoS ONE, v. 6, p. e20996, 2011.

Relevance:

100.00%

Publisher:

Abstract:

The first part of the thesis describes a new patterning technique, microfluidic contact printing, that combines several of the desirable aspects of microcontact printing and microfluidic patterning and addresses some of their important limitations through the integration of a track-etched polycarbonate (PCTE) membrane. Using this technique, biomolecules (e.g., peptides, polysaccharides, and proteins) were printed with high fidelity on a receptor-modified polyacrylamide hydrogel substrate. The patterns obtained can be controlled through modifications of channel design and secondary programming via selective membrane wetting. The protocols support the printing of multiple reagents without registration steps and offer fast recycle times. The second part describes a non-enzymatic, isothermal method to discriminate single nucleotide polymorphisms (SNPs). SNP discrimination using alkaline dehybridization has long been neglected because the pH range in which thermodynamic discrimination can be done is quite narrow. We found, however, that SNPs can be discriminated by the kinetic differences exhibited in the dehybridization of perfectly matched (PM) and mismatched (MM) DNA duplexes in an alkaline solution, observed using fluorescence microscopy. We combined this method with a multifunctional encoded hydrogel particle array (fabricated by stop-flow lithography) to achieve fast kinetics and high versatility. This approach may serve as an effective alternative to temperature-based methods for analyzing unamplified genomic DNA in point-of-care diagnostics.
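
A minimal sketch of the kinetic discrimination principle, assuming simple first-order dehybridization with a faster rate for mismatched duplexes; the rate constants and read-out time are illustrative assumptions, not measured values.

```python
# Sketch only: kinetic (not thermodynamic) SNP discrimination. In alkaline
# dehybridization, mismatched (MM) duplexes dissociate faster than perfectly
# matched (PM) ones, so a fluorescence read-out at a fixed time separates them.
import numpy as np

k_pm, k_mm = 0.02, 0.15          # first-order dehybridization rates (1/s), assumed
t = np.linspace(0, 60, 61)       # seconds, 1 s resolution

f_pm = np.exp(-k_pm * t)         # normalized fluorescence remaining, PM duplex
f_mm = np.exp(-k_mm * t)         # normalized fluorescence remaining, MM duplex

read_out = 30                    # read-out time (s)
ratio = f_pm[read_out] / f_mm[read_out]
print(f"PM/MM signal ratio at {read_out} s: {ratio:.1f}")
```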

Relevance:

100.00%

Publisher:

Abstract:

Communication in vehicular ad hoc networks (VANETs) is commonly divided into two scenarios, namely vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). Aiming at establishing communication that is secure against eavesdroppers, recent works have proposed the exchange of secret keys based on the variation in received signal strength (RSS). However, the performance of such a scheme depends on the channel variation rate, making it more appropriate for scenarios where the channel varies rapidly, as is usually the case in V2V communication. In V2I communication, the channel commonly undergoes slow fading. In this work we propose the use of multiple antennas to artificially generate a fast-fading channel, so that the extraction of secret keys from the RSS becomes feasible in a V2I scenario. Numerical analysis shows that the proposed model can outperform, in terms of secret bit extraction rate, a frequency-hopping-based method proposed in the literature.
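
A minimal sketch of RSS-based secret bit extraction by level-crossing quantization, the general mechanism such schemes build on; the thresholds, trace, and parameters below are illustrative assumptions.

```python
# Sketch only: extract candidate secret bits from an RSS trace by keeping only
# samples that cross upper/lower thresholds (deep excursions), discarding the rest.
import numpy as np

def rss_to_bits(rss, alpha=0.5):
    mean, std = rss.mean(), rss.std()
    upper, lower = mean + alpha * std, mean - alpha * std
    bits = []
    for sample in rss:
        if sample > upper:
            bits.append(1)      # deep "up" excursion -> bit 1
        elif sample < lower:
            bits.append(0)      # deep "down" excursion -> bit 0
        # samples between the thresholds are discarded (too noise-sensitive)
    return bits

rng = np.random.default_rng(0)
# Fast-fading-like RSS trace (dB); in the proposed V2I scheme the variation is
# induced artificially by switching among multiple antennas.
rss_trace = -70 + 6 * rng.standard_normal(200)
print(rss_to_bits(rss_trace)[:16])
```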

Relevance:

100.00%

Publisher:

Abstract:

Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene, without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses low-level feature video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are then classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark with numerous types of sport events, and the experimental results show that our algorithm can be used with high accuracy for the automatic annotation of active regions in sport videos.
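
A minimal sketch of histogram-difference shot boundary detection, one of the low-level steps such a framework typically relies on; the synthetic frames and threshold are illustrative assumptions, and real frames would come from a video decoder.

```python
# Sketch only: detect hard cuts from jumps in consecutive frame histograms.
import numpy as np

def colour_hist(frame, bins=32):
    # Normalized grey-level histogram of a frame (H x W x 3, uint8).
    hist, _ = np.histogram(frame.mean(axis=2), bins=bins, range=(0, 255))
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.4):
    cuts = []
    for i in range(1, len(frames)):
        # L1 distance between consecutive histograms; a large jump suggests a cut.
        d = np.abs(colour_hist(frames[i]) - colour_hist(frames[i - 1])).sum()
        if d > threshold:
            cuts.append(i)
    return cuts

shot_a = [np.full((48, 64, 3), 60, np.uint8) for _ in range(5)]   # dark shot
shot_b = [np.full((48, 64, 3), 200, np.uint8) for _ in range(5)]  # bright shot
print(shot_boundaries(shot_a + shot_b))  # expected: [5]
```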

Relevance:

100.00%

Publisher:

Abstract:

Motion planning, or trajectory planning, commonly refers to the process of converting high-level task specifications into low-level control commands that can be executed on the system of interest. The system differs between applications: it can be an autonomous vehicle, an Unmanned Aerial Vehicle (UAV), a humanoid robot, or an industrial robotic arm. As human-machine interaction is essential in many of these systems, safety is fundamental and crucial. Many of the applications also involve performing a task in an optimal manner within a given time constraint. Therefore, in this thesis we focus on two aspects of the motion planning problem. One is the verification and synthesis of safe controls for autonomous ground and air vehicles in collision avoidance scenarios. The other is high-level planning for autonomous vehicles under timed temporal constraints. In the first aspect of our work, we first propose a verification method, based on reachable sets, to prove the safety and robustness of a path planner and of the path-following controls, and we demonstrate the method on quadrotor and automobile applications. Secondly, we propose a reachable-set-based collision avoidance algorithm for UAVs. Instead of the traditional approach of collision avoidance between trajectories, we propose a collision avoidance scheme based on reachable sets and tubes. We then formulate the problem as a convex optimization problem that seeks a control set design allowing the aircraft to avoid collision, and we apply the approach to collision avoidance scenarios involving quadrotors and fixed-wing aircraft. In the second aspect of our work, we address high-level planning problems with timed temporal logic constraints. Firstly, we present an optimization-based method for the path planning of a mobile robot subject to timed temporal constraints in a dynamic environment. Temporal logic (TL) can express very complex task specifications such as safety, coverage, motion sequencing, etc. We use metric temporal logic (MTL) to encode task specifications with timing constraints; we then translate the MTL formulae into mixed-integer linear constraints and solve the associated optimization problem using a mixed-integer linear program solver. We have applied this approach to several case studies in complex dynamic environments subject to timed temporal specifications. Secondly, we present a timed-automaton-based method for planning under given timed temporal logic specifications. We use metric interval temporal logic (MITL), a member of the MTL family, to represent the task specification, and provide a constructive way to generate a timed automaton, together with methods to search for accepting runs on the automaton, in order to find an optimal motion (or path) sequence for the robot to complete the task.
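
A minimal sketch of box-valued (interval) reachable-set propagation with collision checking, in the spirit of the reachable-set verification described above; the one-dimensional double-integrator dynamics, control bounds, and obstacle are illustrative assumptions.

```python
# Sketch only: propagate an axis-aligned box of (position, velocity) states of a
# 1-D double integrator under bounded control, then check the tube against an obstacle.
import numpy as np

def propagate_box(lo, hi, u_min, u_max, dt=0.1, steps=20):
    boxes = [(lo.copy(), hi.copy())]
    for _ in range(steps):
        # Interval arithmetic for pos' = pos + vel*dt, vel' = vel + u*dt (dt > 0).
        new_lo = np.array([lo[0] + dt * lo[1], lo[1] + dt * u_min])
        new_hi = np.array([hi[0] + dt * hi[1], hi[1] + dt * u_max])
        lo, hi = new_lo, new_hi
        boxes.append((lo.copy(), hi.copy()))
    return boxes

def hits_obstacle(boxes, obs_lo, obs_hi):
    # The tube is unsafe if any box overlaps the obstacle interval in position.
    return any(hi[0] >= obs_lo and lo[0] <= obs_hi for lo, hi in boxes)

tube = propagate_box(np.array([0.0, 0.9]), np.array([0.1, 1.1]), u_min=-0.2, u_max=0.2)
print("collision possible:", hits_obstacle(tube, 2.5, 3.0))
```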

Relevance:

100.00%

Publisher:

Abstract:

As one of the newest members of the family of Artificial Immune Systems (AIS), the Dendritic Cell Algorithm (DCA) has been applied to a range of problems. These applications mainly belong to the field of anomaly detection. However, real-time detection, a new challenge for anomaly detection, requires improvement of the real-time capability of the DCA. To assess this capability, formal methods from the research on real-time systems can be employed, and the findings of the assessment can provide guidelines for the future development of the algorithm. Therefore, in this paper we use an interval-logic-based method, named the Duration Calculus (DC), to specify a simplified single-cell model of the DCA. Based on the DC specifications, with further induction, we find that each individual cell in the DCA can perform its function as a detector in real time. Since the DCA can be seen as many such cells operating in parallel, it is potentially capable of performing real-time detection. However, the analysis process of the standard DCA constrains its real-time capability. As a result, we conclude that the analysis process of the standard DCA should be replaced by a real-time analysis component that can perform periodic analysis for the purpose of real-time detection.
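
A minimal sketch of the simplified single-cell view assumed above: input signals are fused into interim outputs by fixed weighted sums, and the cell reports its context when its lifespan is spent; the weights and threshold are illustrative assumptions, not the algorithm's published parameter set.

```python
# Sketch only: one DCA cell fuses (PAMP, danger, safe) signals into interim
# outputs and flags an anomaly when its "mature" output dominates the "semi" one.

WEIGHTS = {                      # per output signal: weights for (PAMP, danger, safe)
    "csm":    (2.0, 1.0, 2.0),   # costimulation -> controls the cell's lifespan
    "semi":   (0.0, 0.0, 3.0),   # semi-mature   -> evidence of normality
    "mature": (2.0, 1.0, -3.0),  # mature        -> evidence of anomaly
}

def process_cell(samples, lifespan=10.0):
    totals = {name: 0.0 for name in WEIGHTS}
    for pamp, danger, safe in samples:
        for name, (wp, wd, ws) in WEIGHTS.items():
            totals[name] += wp * pamp + wd * danger + ws * safe
        if totals["csm"] >= lifespan:      # lifespan spent: present the context
            break
    return "anomalous" if totals["mature"] > totals["semi"] else "normal"

# Each tuple is one sampled (PAMP, danger, safe) signal vector.
print(process_cell([(0.8, 0.6, 0.1), (0.9, 0.7, 0.0), (0.7, 0.5, 0.2)]))  # anomalous
print(process_cell([(0.0, 0.1, 0.9), (0.1, 0.0, 0.8)]))                    # normal
```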

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a method for scheduling tariff time periods for electricity consumers. Europe will see broader use of modern smart meters for electricity at residential consumers, and these meters must be used to enable demand response. A heuristic-based method for tariff time-period scheduling and pricing is proposed which considers different consumer groups with parameters studied a priori, taking advantage of the demand response potential of each group and of the fairness of electricity pricing for all consumers. The tool was applied to the case of Portugal, considering the actual network and generation costs, specific consumption profiles, and the overall low-voltage electricity demand diagram. The proposed method achieves valid results, and its use can provide justification for the setting of tariff time periods by energy regulators, network operators and suppliers. It is also useful for estimating the benefits to consumers and to the electric sector from changes in tariff time periods.
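
A minimal sketch of a greedy heuristic for tariff time-period scheduling: rank the hours of a daily demand profile and assign the highest-demand hours to the peak period; the demand profile and the three-period split are illustrative assumptions, not the calibrated Portuguese case data.

```python
# Sketch only: assign each hour of a daily demand profile to a tariff period.
import numpy as np

demand = np.array([310, 295, 288, 285, 290, 320, 420, 560, 610, 590, 570, 565,
                   580, 575, 560, 555, 600, 680, 720, 700, 640, 520, 410, 340])  # MW per hour

order = np.argsort(demand)[::-1]          # hours sorted by demand, highest first
periods = np.empty(24, dtype=object)
periods[order[:4]] = "peak"               # 4 highest-demand hours
periods[order[4:14]] = "shoulder"         # next 10 hours
periods[order[14:]] = "off-peak"          # remaining 10 hours

for hour in range(24):
    print(f"{hour:02d}:00  {demand[hour]:4d} MW  {periods[hour]}")
```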

Relevance:

100.00%

Publisher:

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores to run the software and interact with the other hardware components of the system, and the power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial to satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of the processor for a specific piece of software is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup in comparison with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the Insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and an orders-of-magnitude speedup over the simulation-based method.
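
A minimal sketch of the average-case energy model idea for Insertion sort: estimated energy grows with the expected number of comparisons; the per-comparison and fixed costs are illustrative assumptions rather than the LEON3 measurements, and the comparison count uses the textbook average-case approximation of roughly n(n-1)/4.

```python
# Sketch only: comparison-count-driven average-case energy estimate for Insertion sort.

def avg_comparisons(n: int) -> float:
    # Textbook approximation of the average-case comparison count on a random permutation.
    return n * (n - 1) / 4.0

def estimated_energy(n: int, e_cmp_nj: float = 2.5, e_fixed_nj: float = 500.0) -> float:
    # Energy in nanojoules: fixed overhead plus an assumed cost per comparison.
    return e_fixed_nj + e_cmp_nj * avg_comparisons(n)

for n in (16, 64, 256, 1024):
    print(f"n = {n:5d}  estimated energy ≈ {estimated_energy(n) / 1e3:8.1f} µJ")
```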