26 results for "Using an harmonic instrument"
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Because of its operating principles, electron microscopy requires samples to be chemically fixed and exposed to vacuum conditions that can alter the morphology of the specimen. The aim of this work was to obtain high-resolution images of the fresh articular cartilage surface with an environmental scanning electron microscope (ESEM), an instrument that permits examination of biological specimens without fixation at a chamber pressure of 10 Torr, thus minimizing the risk of creating artifacts in the structure. Samples from weight-bearing areas of femoral condyles of New Zealand white rabbits were collected and photographed using an ESEM. Images were analyzed using a categorization based on the Jurvelin classification system as modified by Hong and Henderson. The elevations and depressions described in the classification were observed, but no fractures or splits of the cartilage surface, which are thought to be artifacts, were detected. The ESEM is a useful tool for imaging the surface of fresh articular cartilage without either employing fixation methods or exposing the specimen to extreme vacuum conditions, reducing the risk of introducing artifacts into the specimen. For all these reasons, it could become a useful tool for quality control of the preservation process of osteochondral allografts in a bank of musculoskeletal tissues.
Abstract:
In this paper we describe a system for underwater navigation with AUVs in partially structured environments, such as dams, ports, or marine platforms. An imaging sonar is used to obtain information about the location of planar structures present in such environments. This information is incorporated into a feature-based SLAM algorithm in a two-step process: (1) the full 360° sonar scan is undistorted (to compensate for vehicle motion), thresholded, and segmented to determine which measurements correspond to planar environment features and which should be ignored; and (2) SLAM proceeds once the data association is obtained: both the vehicle motion and the measurements whose correct association has been previously determined are incorporated into the SLAM algorithm. This two-step delayed SLAM process allows the feature and vehicle locations to be robustly determined in the presence of large numbers of spurious or unrelated measurements that might correspond to boats, rocks, etc. Preliminary experiments show the viability of the proposed approach.
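As a rough sketch of step (1), the snippet below undistorts a set of sonar beams with a dead-reckoned pose lookup and keeps only runs of returns long enough to plausibly come from planar structures. All names (`undistort`, `segment_planar`, the beam tuples) and the toy data are our own illustrative choices, not the paper's code.

```python
import math

def undistort(beams, pose_at):
    """Project each (timestamp, beam_angle, range) return into a fixed
    frame, compensating for the vehicle motion accumulated while the
    sonar head completed its 360-degree sweep."""
    points = []
    for t, angle, rng in beams:
        x, y, yaw = pose_at(t)                 # dead-reckoned pose at time t
        points.append((x + rng * math.cos(yaw + angle),
                       y + rng * math.sin(yaw + angle)))
    return points

def segment_planar(points, max_gap=0.5, min_len=5):
    """Keep only runs of consecutive nearby points, i.e. candidate planar
    features; short, isolated clusters (boats, rocks, ...) are dropped."""
    segments, run = [], points[:1]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) <= max_gap:
            run.append(q)
        else:
            if len(run) >= min_len:
                segments.append(run)
            run = [q]
    if len(run) >= min_len:
        segments.append(run)
    return segments

# Toy data: stationary vehicle, a wall of returns plus one isolated echo.
pose_at = lambda t: (0.0, 0.0, 0.0)
beams = [(t, math.radians(80 + t), 5.0) for t in range(20)] + [(21, 2.5, 1.0)]
walls = segment_planar(undistort(beams, pose_at))
print(len(walls), "planar feature(s) kept")
```

Only the surviving segments would then be passed, with their resolved associations, to the SLAM filter in step (2).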
Abstract:
This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures for a point-to-multipoint connection can be considered: bandwidth cost, connection establishment cost, and switching cost. The application of the method, based on pre-evaluated routing tables, makes it possible to reduce the bandwidth cost and the connection establishment cost individually.
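A minimal sketch of what routing from pre-evaluated tables could look like; the table contents, cost weights, and names below are entirely hypothetical, not taken from the paper.

```python
# Hypothetical pre-evaluated routing table: for each destination set, a few
# candidate point-to-multipoint trees with their three costs computed in
# advance ("anticipated calculation"). All data here are made up.
PRECOMPUTED = {
    frozenset({"B", "C"}): [
        {"tree": ["A-B", "A-C"], "bandwidth": 3.0, "setup": 1, "switching": 2},
        {"tree": ["A-B", "B-C"], "bandwidth": 2.0, "setup": 3, "switching": 3},
    ],
}

def pick_route(destinations, w_bw=1.0, w_setup=1.0, w_sw=0.0):
    """At connection time, only rank the pre-evaluated candidates instead
    of running a full multicast tree search."""
    candidates = PRECOMPUTED[frozenset(destinations)]
    combined = lambda c: (w_bw * c["bandwidth"] + w_setup * c["setup"]
                          + w_sw * c["switching"])
    return min(candidates, key=combined)

print(pick_route({"B", "C"})["tree"])   # favours the lower combined cost
```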
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given the potential hazards of thunderstorms, including lightning, hail, heavy rainfall, strong winds, or even tornadoes. This study has two main objectives: first, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; second, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground (CG) lightning data, and intra-cloud (IC) lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity, and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the IC and CG flash activity is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase somewhat more CG flashes are observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life-cycle patterns, obtained from the analysis of thunderstorm parameters normalized with respect to the total thunderstorm duration and the maximum value of each variable considered. Among other findings, the study indicates that the normalized duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
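A minimal sketch of the kind of normalization described above, assuming a simple time-series representation; the function and variable names are ours, not the study's processing chain.

```python
def normalize_life_cycle(times_min, values):
    """Rescale a thunderstorm parameter (e.g., total flash rate) so that
    time runs from 0 to 1 over the storm's life and the parameter peaks
    at 1; storms of different duration and intensity then become
    directly comparable."""
    t0, t1 = times_min[0], times_min[-1]
    vmax = max(values)
    return [((t - t0) / (t1 - t0), v / vmax)
            for t, v in zip(times_min, values)]

# Toy storm: 90 minutes long, peak rate 40 flashes/min at minute 30.
print(normalize_life_cycle([0, 30, 60, 90], [5, 40, 30, 10]))
```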
Abstract:
This article reports the phase behavior determination of a system forming reverse liquid crystals and the formation of novel disperse systems in the two-phase region. The studied system is formed by water, cyclohexane, and Pluronic L-121, an amphiphilic block copolymer considered of special interest due to its aggregation and structural properties. This system forms reverse cubic (I2) and reverse hexagonal (H2) phases at high polymer concentrations. These reverse phases are of particular interest since, in the two-phase region, stable high-internal-phase reverse emulsions can be formed. The characterization of the I2 and H2 phases and of the derived gel emulsions was performed with small-angle X-ray scattering (SAXS) and rheometry, and the influence of temperature and water content was studied. The H2 phase underwent a thermal transition to an I2 phase, which presented an Fd3m structure, when temperature was increased. All samples showed strong shear-thinning behavior from low shear rates. The elastic modulus (G′) in the I2 phase was around one order of magnitude higher than in the H2 phase, and G′ was predominantly higher than the viscous modulus (G″). In the gel emulsions, G′ was nearly frequency-independent, indicating their gel-type nature. In contrast to water-in-oil (W/O) normal emulsions, in W/I2 and W/H2 gel emulsions G′, the complex viscosity (|η*|), and the yield stress (τ0) decreased with increasing water content, since the highly viscous microstructure of the continuous phase, rather than the volume fraction of dispersed phase and the droplet size, was responsible for the high viscosity and elastic behavior of the emulsions. A rheological analysis, in which the cooperative flow theory, the soft glass rheology model, and the slip plane model were analyzed and compared, was performed to obtain a single model that could describe the non-Maxwellian behavior of both reverse phases and highly concentrated emulsions and to characterize their microstructure through the rheological properties.
Abstract:
This paper introduces a new approach to the analysis of the offensive game in football. The main aim of this study was to create an instrument for collecting information for the analysis of offensive actions and game interactions. The observation instrument used to accomplish this objective consists of a combination of field formats (FC) and systems of categories (SC). This methodology is a particular strategy of the scientific method whose objective is to analyse perceptible behaviour occurring in habitual contexts, allowing it to be formally recorded and quantified using an ad hoc instrument. The resulting systematic register of behaviour, once transformed into quantitative data with the necessary levels of reliability and validity, allows analysis of the relations between these behaviours. The codifications undertaken to date in various football matches have shown that the instrument serves the purposes for which it was developed, enabling further research into offensive game methods in football.
Abstract:
It is common to find persistent oscillations in aggregate outcomes and high levels of heterogeneity in individual behavior in experimental data. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings. We use data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual-learning genetic algorithm, where agents have constraints on their working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns found in common property resource experiments.
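A minimal sketch of an individual-learning genetic algorithm of this kind: each agent evolves its own small pool of strategies (its working memory), so learning is individual rather than social. The payoff function and all parameters below are toy stand-ins, not the calibration used with the Casari and Plott (2003) data.

```python
import random

def individual_learning_ga(payoff, n_agents=8, memory=6, generations=200,
                           p_explore=0.1, strategies=range(0, 21)):
    pools = [[random.choice(strategies) for _ in range(memory)]
             for _ in range(n_agents)]
    for _ in range(generations):
        # Each agent plays the best strategy it currently remembers.
        plays = [max(pool, key=lambda s: payoff(s, pools)) for pool in pools]
        for i, pool in enumerate(pools):
            # Individual learning: the worst remembered strategy is replaced,
            # usually by the agent's own current play (reinforcement), but
            # occasionally by a random experiment (bounded rationality).
            worst = min(range(memory), key=lambda k: payoff(pool[k], pools))
            pool[worst] = (random.choice(strategies)
                           if random.random() < p_explore else plays[i])
    return [max(pool, key=lambda s: payoff(s, pools)) for pool in pools]

# Toy common-pool payoff: private benefit minus congestion from total effort.
def payoff(s, pools):
    total_effort = sum(max(p) for p in pools)
    return s * (25 - 0.4 * total_effort) - 2 * s

print(individual_learning_ga(payoff))
```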
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and a number of previously available data sets and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors in variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
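For reference, the classical errors-in-variables result that these meta-estimates build on can be written as follows in the simplest bivariate case (notation ours): measurement noise in schooling attenuates the OLS coefficient by the data set's reliability ratio, so noisier data sets yield smaller, less significant coefficients.

```latex
% Schooling observed with additive, uncorrelated noise:
\tilde{S}_i = S_i + u_i, \qquad \operatorname{Cov}(S_i, u_i) = 0, \qquad
\log Y_i = \alpha S_i + \varepsilon_i ,
% so OLS on the noisy series is attenuated by the reliability ratio \lambda:
\operatorname{plim}\, \hat{\alpha}_{\mathrm{OLS}}
  = \alpha \, \frac{\operatorname{Var}(S)}{\operatorname{Var}(S) + \operatorname{Var}(u)}
  \equiv \alpha \lambda , \qquad 0 < \lambda < 1 .
```

Data sets with higher information content have a larger λ, which is consistent with the positive correlation between data quality and coefficient size reported above, and correcting for λ pushes the meta-estimate of the schooling coefficient upward.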
Abstract:
TCP flows from applications such as the web or FTP are well supported by a Guaranteed Minimum Throughput Service (GMTS), which provides a minimum network throughput to the flow and, if possible, extra throughput. We propose a scheme for a GMTS using Admission Control (AC) that is able to provide different minimum throughputs to different users and that is suitable for "standard" TCP flows. Moreover, we consider a multidomain scenario where the scheme is used in one of the domains, and we propose some mechanisms for the interconnection with neighboring domains. The whole scheme uses a small set of packet classes in a core-stateless network, where each class is assigned a different discarding priority in the queues. The AC method involves only edge nodes and uses a special probing packet flow (marked with the highest discarding priority) that is sent continuously from ingress to egress through a path. The available throughput on the path is obtained at the egress using measurements of flow aggregates and is then sent back to the ingress. At the ingress, each flow is detected implicitly and then admission controlled: if accepted, it receives the GMTS and its packets are marked with the lowest discarding priorities; otherwise, it receives a best-effort service. The scheme is evaluated through simulation in a simple "bottleneck" topology using different traffic loads consisting of "standard" TCP flows that carry files of varying sizes.
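A toy illustration of the ingress-side decision; the names and the one-line test are deliberate simplifications of the scheme described above, not the paper's exact rules.

```python
def admission_control(min_throughput_kbps, probe_throughput_kbps):
    """Toy ingress decision for a GMTS scheme. The egress continuously
    measures the throughput achieved by the probing flow (sent with the
    highest discarding priority, so it only gets leftover bandwidth) and
    feeds the figure back to the ingress, which admits a new flow only
    if its guaranteed minimum fits in that margin."""
    if min_throughput_kbps <= probe_throughput_kbps:
        return "accepted: GMTS, packets marked with low discarding priority"
    return "rejected: flow carried as best effort"

print(admission_control(min_throughput_kbps=500, probe_throughput_kbps=2000))
```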
Abstract:
CO2 emissions induced by human activities are the major cause of climate change; hence, a strong environmental policy that limits the growing dependence on fossil fuels is indispensable. Tradable permits and environmental taxes are the usual tools used in CO2 reduction strategies. Such economic tools provide incentives, through market signals, for polluting industries to reduce their emissions. The aim of this work is to investigate the direct and indirect effects of an environmental tax on Spanish products and services. We apply an environmentally extended input-output (EIO) model to identify the CO2 emission intensities of products and services and, accordingly, we estimate a tax proportional to these intensities. The short-term price effects are analyzed using an input-output price model. The effect of the tax introduction on consumption prices and its influence on consumers’ welfare are determined. We also quantify the environmental impacts of such taxation in terms of the reduction in CO2 emissions. The results, based on the Spanish economy for the year 2007, show that sectors with a relatively poor environmental profile are subject to high environmental tax rates. Consequently, applying a CO2 tax to these sectors increases production prices, induces a slight increase in the consumer price index, and decreases private welfare. The revenue from the tax could be used to counterbalance the negative effects on social welfare and also to stimulate an increase in the share of renewable energy in the sectors with the greatest impact. Finally, our analysis highlights that the environmental and economic goals cannot both be met by environmental taxation alone, which shows the necessity of finding other (complementary or alternative) measures to ensure both economic and ecological efficiency.
Keywords: CO2 emissions; environmental tax; input-output model; effects of environmental taxation.
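The EIO and price computations can be illustrated with the standard Leontief machinery; the sketch below uses made-up three-sector data and a made-up tax rate, not the 2007 Spanish tables.

```python
import numpy as np

# Toy technical-coefficient matrix A and direct CO2 intensities c
# (kg CO2 per monetary unit of output) for three stand-in sectors.
A = np.array([[0.10, 0.04, 0.02],
              [0.15, 0.20, 0.05],
              [0.05, 0.10, 0.08]])
c = np.array([0.9, 0.3, 0.1])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1
e = c @ L                          # total (direct + indirect) intensities
tax = 0.05 * e                     # tax set proportional to total intensity

# Leontief price model: cost-push effect of the tax on producer prices,
# with baseline prices normalized to 1.
dp = tax @ L
print("total intensities:", e.round(3))
print("price increases  :", dp.round(3))
```

Sectors with a poor environmental profile (large entries of e) receive the highest tax and show the largest price increases, matching the pattern reported above.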
Abstract:
This paper presents the design and implementation of a mission control system (MCS) for an autonomous underwater vehicle (AUV) based on Petri nets. In the proposed approach, Petri nets are used both to specify and to execute the desired autonomous vehicle mission. The mission is easily described using an imperative programming language called the mission control language (MCL), which formally describes the mission execution thread. A mission control language compiler (MCL-C) that automatically translates MCL into a Petri net is described, and a real-time Petri net player that executes the resulting Petri net onboard an AUV is also presented.
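The execution semantics of a Petri net player can be illustrated in a few lines: places hold tokens, and a transition fires when all of its input places are marked. The two-transition "mission" below is a toy of our own, not output of the MCL-C compiler.

```python
# Toy net: vehicle must reach a waypoint before running a survey.
marking = {"ready": 1, "goto_done": 0, "survey_done": 0}
transitions = [
    {"name": "goto_waypoint", "pre": ["ready"],     "post": ["goto_done"]},
    {"name": "run_survey",    "pre": ["goto_done"], "post": ["survey_done"]},
]

def step(marking, transitions):
    for t in transitions:
        if all(marking[p] > 0 for p in t["pre"]):   # transition enabled?
            for p in t["pre"]:
                marking[p] -= 1                     # consume input tokens
            for p in t["post"]:
                marking[p] += 1                     # produce output tokens
            return t["name"]                        # fire one transition
    return None                                     # nothing enabled: done

while (fired := step(marking, transitions)) is not None:
    print("executing:", fired)
```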
Abstract:
Nanomotors are nanoscale devices capable of converting energy into movement and forces. Among them, self-propelled nanomotors offer considerable promise for developing novel bioanalytical and biosensing strategies based on the direct isolation of target biomolecules or on changes in nanomotor movement in the presence of target analytes. The main achievement of this project is the development of receptor-functionalized nanomotors that offer direct and rapid target detection, isolation, and transport from raw biological samples without preparatory and washing steps. For example, microtube engines functionalized with aptamer, antibody, lectin, and enzyme receptors were used for the direct isolation of analytes of biomedical interest, including proteins and whole cells, among others. A target protein was also isolated from a complex sample by using an antigen-functionalized microengine navigating in the reservoirs of a lab-on-a-chip device. The new nanomotor-based biomarker detection strategy not only offers a highly sensitive, rapid, simple, and low-cost alternative for the isolation and transport of target molecules, but also represents a new dimension of analytical information based on motion. The recognition events can be easily visualized with an optical microscope (without any sophisticated analytical instrument) to reveal the target presence and concentration. The use of artificial nanomachines has proven useful not only for (bio)recognition and (bio)transport but also for the detection and remediation of environmental contamination. In this context, micromotors modified with a superhydrophobic layer effectively interacted with, captured, transported, and removed oil droplets from oil-contaminated samples. Finally, a unique micromotor-based strategy for water-quality testing that mimics live-fish water-quality testing, based on changes in the propulsion behavior of artificial biocatalytic microswimmers in the presence of aquatic pollutants, was also developed. The attractive features of the new micromachine-based target isolation and signal transduction protocols developed in this project offer numerous potential applications in biomedical diagnostics, environmental monitoring, and forensic analysis.
Abstract:
Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a Zener model of the vascular wall. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation, in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error was about 27% even when a poor signal-to-noise ratio was simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a mean parameter estimation error of 25%.
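For reference, one common parameterization of the Zener (standard linear solid) model is the following first-order constitutive law (notation ours); discretizing the time derivatives is what yields an ARMA relation between the sampled signals.

```latex
% Relaxed modulus E_0 and two characteristic times:
\sigma(t) + \tau_\varepsilon \, \dot{\sigma}(t)
  = E_0 \left[ \varepsilon(t) + \tau_\sigma \, \dot{\varepsilon}(t) \right],
\qquad \tau_\sigma > \tau_\varepsilon > 0 .
% Replacing the derivatives with finite differences gives a first-order
% ARMA relation between the sampled stress and strain sequences, from
% which (E_0, \tau_\sigma, \tau_\varepsilon) can be fitted noniteratively.
```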
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it is to be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of a classical PFSP heuristic that generates different alternative initial solutions of similar quality. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating our biased randomization process with a high-quality pseudo-random number generator.
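A bare-bones ILS skeleton of the kind ILS-ESP builds on is sketched below; the adjacent-swap local search, random-swap perturbation, and toy objective are our own placeholders for the paper's operators, kept deliberately parameter-free.

```python
import random

def perturb(perm):
    # Swap two random positions (stand-in for the paper's new operator).
    q = perm[:]
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

def local_search(perm, cost):
    # First-improvement over adjacent swaps, repeated until no swap helps.
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            q = perm[:]
            q[i], q[i + 1] = q[i + 1], q[i]
            if cost(q) < cost(perm):
                perm, improved = q, True
    return perm

def ils_esp_like(start, cost, iters=500):
    best = local_search(start, cost)
    for _ in range(iters):
        cand = local_search(perturb(best), cost)
        if cost(cand) < cost(best):   # simple "accept only improvements" rule
            best = cand
    return best

# Toy objective standing in for PFSP makespan: distance from the identity.
cost = lambda p: sum(abs(i - v) for i, v in enumerate(p))
print(ils_esp_like(list(range(10))[::-1], cost))
```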
Abstract:
This paper compares two well-known scan matching algorithms, MbICP and pIC. As a result of the study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions are: (1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty; and (2) a method to group all the data grabbed along the path described by the robot into a single scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, with satisfactory results.
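A toy version of contribution (1), an EKF-style dead-reckoning prediction that propagates pose uncertainty while the scan is being grabbed, is sketched below; the state, motion model, and noise values are our own simplifications, not the paper's filter.

```python
import numpy as np

def dr_predict(x, P, v, yaw_rate, dt, Q):
    """One dead-reckoning prediction step. State x = [x, y, yaw];
    v from the DVL, yaw_rate from the MRU. P is the pose covariance,
    Q the (assumed) process noise."""
    yaw = x[2]
    x_new = x + dt * np.array([v * np.cos(yaw), v * np.sin(yaw), yaw_rate])
    F = np.array([[1, 0, -dt * v * np.sin(yaw)],   # Jacobian of the motion
                  [0, 1,  dt * v * np.cos(yaw)],   # model w.r.t. the state
                  [0, 0,  1]])
    P_new = F @ P @ F.T + Q            # uncertainty grows with each step
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 1e-4
Q = np.diag([1e-3, 1e-3, 1e-4])
for _ in range(10):                    # ten beams grabbed along the path
    x, P = dr_predict(x, P, v=1.0, yaw_rate=0.05, dt=0.1, Q=Q)
print(x, np.diag(P))
```

The growing covariance along the path is exactly what contribution (2) folds into the grouped scan's uncertainty model before matching.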