945 results for "Good performance during a task"
Abstract:
Aquaculture of filter-feeding bivalve mollusks involves the profitable conversion of marine particulate organic matter into premium protein of high nutritive value. Culture performance of bivalves is largely dependent on hydrological conditions and is directly affected by, e.g., temperature and chlorophyll levels. Accordingly, these parameters may be related to seasonality but also to oceanographic features combined with climate events. Yields of the Pacific cupped oyster (Crassostrea gigas), reared under commercial procedures in suspended structures (long-lines) in a sheltered bay in Southern Brazil (Santa Catarina State, 27°43'S; 48°30'W), were evaluated in relation to local environmental conditions: sea surface temperature, chlorophyll a concentration, and the associated effects of cold-front events and El Niño and La Niña periods. Outputs from four consecutive commercial crop years (2005/06, 2006/07, 2007/08, 2008/09) were analyzed in terms of oyster survival and development time during the following grow-out phases of the culture cycle: seed to juvenile, juvenile to adult, and adult to marketable size. Since culture management and genetics were standardized, significant differences in crop performance could mostly be attributed to environmental effects. Time series of temperature and chlorophyll a (remote-sensing data) from the crop periods displayed significant seasonal and interannual variation. As expected, performance during the initial grow-out stage (seed to juvenile) was critical for final crop yield. Temperature was the main factor affecting survival in this initial stage, with a trend toward negative correlation, though not statistically significant. On the other hand, oyster development rate was significantly and positively affected by chlorophyll a concentration.
Chlorophyll a values could be increased by upwelled cold, nutrient-rich South Atlantic Central Water (SACW, related to predominant northerly winds), though this further depends on the occurrence of southerly winds (cold fronts) to assist seawater penetration into the sheltered farming area. Lower-salinity, nutrient-rich waters drifted northward from the La Plata River discharge may also raise chlorophyll a in the farming area. The El Niño period (July 2006 to February 2007) coincided with lower chlorophyll a levels at the farming site, which may be related both to a decreased number of cold fronts and to the predominance of northerly winds that restrain the northward spreading of La Plata River discharge waters. In contrast, the La Niña period (August 2007 to June 2008) corresponded to higher chlorophyll a values in the farming area, through both upwelling of SACW and penetration of La Plata River discharge water, assisted by an increased occurrence of southerly winds and cold fronts. Recognition of a potentially changing climate and its effects upon the environment will be an important step in planning the future development of bivalve aquaculture.
Abstract:
A neural network model to predict ozone concentration in the São Paulo Metropolitan Area was developed, based on average values of meteorological variables in the morning (8:00-12:00 hr) and afternoon (13:00-17:00 hr) periods. Outputs are the maximum and average ozone concentrations in the afternoon (12:00-17:00 hr). The correlation coefficients between computed and measured values were 0.82 and 0.88 for the maximum and average ozone concentration, respectively. The model performed well as a prediction tool for the maximum ozone concentration. For prediction periods of 1 to 5 days, failure rates of 0 to 23% (95% confidence) were obtained.
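As a rough sketch of how such a predictor can be evaluated, the example below fits a simple linear model (a stand-in for the paper's neural network; the actual inputs, architecture, and data are not reproduced here) on synthetic morning meteorological averages and reports the correlation coefficient between computed and measured afternoon values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical morning predictors (standardized units): temperature, humidity, wind speed
X = rng.normal(size=(n, 3))
# Synthetic afternoon maximum ozone: linear response plus noise (illustrative only)
y = 80 + X @ np.array([30.0, -10.0, -5.0]) + rng.normal(scale=8.0, size=n)

# Linear least-squares fit as a stand-in for the paper's trained neural network
A = np.column_stack([np.ones(n), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
r = np.corrcoef(y, A @ w)[0, 1]
print(f"correlation between computed and measured values: {r:.2f}")
```

On real monitoring data the linear stand-in would be replaced by the trained network, but the correlation-coefficient check is the same.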
Abstract:
BACKGROUND: Alkaline sulfite/anthraquinone (ASA) cooking of Pinus radiata and Pinus caribaea wood chips followed by disk refining was used as a pretreatment for the production of low-lignin, highly fibrillated pulps. The pulps, produced with different delignification degrees and refined at different energy inputs (250, 750 and 1600 Wh), were saccharified with cellulases and fermented to ethanol with Saccharomyces cerevisiae using separate hydrolysis and fermentation (SHF) or semi-simultaneous saccharification and fermentation (SSSF) processes. RESULTS: Delignification of ASA pulps was between 25% and 50%, with low glucan losses. Pulp yield was 70-78% for P. radiata pulps and 60% for the P. caribaea pulp. Pulps obtained after refining were evaluated in enzymatic hydrolysis assays. Glucan-to-glucose conversion varied from 20 to 70%, depending on the degree of delignification and fibrillation of the pulps. The best ASA pulp of P. radiata was used in SHF and SSSF ethanol-production experiments. These experiments produced a maximum ethanol concentration of 20 g L-1, which represented roughly 90% glucose conversion and an estimated 260 L of ethanol per ton of wood. P. caribaea pulp also performed well in enzymatic hydrolysis and fermentation but, due to its low cellulose content, only 140 L of ethanol would be obtained from each ton of wood. CONCLUSION: ASA cooking followed by disk refining was shown to be an efficient pretreatment process, generating a low-lignin, highly fibrillated substrate that allowed the production of ethanol from these softwoods with high conversion yields. (C) 2012 Society of Chemical Industry
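A back-of-the-envelope version of the litres-per-ton estimate can be sketched as follows. The pulp glucan fraction is a hypothetical value chosen for illustration; only the 75% pulp yield and 90% conversion sit within ranges reported above:

```python
# Back-of-the-envelope ethanol yield per ton of wood. All inputs are assumed
# illustrative values except the pulp yield and conversion, which sit in the
# ranges reported in the abstract; the glucan fraction in particular is a guess.
wood_kg = 1000.0
pulp_yield = 0.75        # within the reported 70-78% range for P. radiata
glucan_fraction = 0.55   # assumed glucan content of the pulp (hypothetical)
conversion = 0.90        # reported glucan-to-glucose conversion for the best pulp
hydration = 1.111        # mass gain of glucan on hydrolysis to glucose (162 -> 180 g/mol)
etoh_per_glucose = 0.511 # theoretical ethanol mass yield from glucose
etoh_density = 0.789     # kg/L at 20 degrees C

glucose_kg = wood_kg * pulp_yield * glucan_fraction * conversion * hydration
ethanol_L = glucose_kg * etoh_per_glucose / etoh_density
print(f"estimated ethanol: {ethanol_L:.0f} L per ton of wood")
```

Under these assumed inputs the estimate lands in the same ballpark as the roughly 260 L per ton reported for P. radiata.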
Abstract:
In this work, the temperature impact on the off-state current components is analyzed through numerical simulation and experimentally. First, band-to-band tunneling (BTBT) is studied by varying the underlap in the channel/drain junction, leading to an analysis of the different off-state current components. For pTFET devices, the best off-state current behavior was obtained for higher values of underlap (reduced BTBT) and at low temperatures (reduced Shockley-Read-Hall (SRH) recombination and trap-assisted tunneling (TAT)). At high temperature, an unexpected off-state current occurred due to thermal leakage through the drain/channel junction. In addition, these devices showed good drain current versus drain voltage characteristics, making them suitable for analog applications. (C) 2012 Elsevier Ltd. All rights reserved.
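The thermal-leakage trend can be illustrated with the standard intrinsic-carrier-concentration scaling, n_i ∝ T^1.5 · exp(-Eg/2kT), since SRH generation current roughly tracks n_i. The silicon-like bandgap is an assumption of this sketch, not a detail taken from the abstract:

```python
import numpy as np

k = 8.617e-5   # Boltzmann constant, eV/K
Eg = 1.12      # assumed silicon-like bandgap, eV (illustrative)

def ni_trend(T):
    """Intrinsic carrier concentration trend (arbitrary prefactor):
    n_i ~ T^1.5 * exp(-Eg / (2 k T))."""
    return T**1.5 * np.exp(-Eg / (2 * k * T))

# SRH generation leakage roughly tracks n_i, so warming from 300 K to 400 K
# increases the off-state leakage by roughly two to three orders of magnitude.
ratio = ni_trend(400.0) / ni_trend(300.0)
print(f"n_i(400 K) / n_i(300 K) ~ {ratio:.0f}")
```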
Abstract:
This paper presents an IR and Raman experiment carried out during the course "Chemical Bonds" for undergraduate students of Science and Technology and of Chemistry at the Federal University of ABC, in order to facilitate and encourage the teaching and learning of group theory. Some key aspects of this theory are also outlined. We believe that student learning became more significant with the introduction of this experiment, as there was an increase in the level of discussion and in performance during evaluations. This work also proposes a multidisciplinary approach that includes the use of quantum chemistry tools.
Abstract:
The clustering problem consists of finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper studies a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives: to propose heuristics for the MMD problem, and to evaluate the suitability of the best proposed heuristic's results against the real classification of some data sets. Regarding the first objective, the experimental results indicate good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the method from the literature most used for this problem). Regarding the suitability of the results with respect to the real classification of the data sets, however, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
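A minimal sketch of the MMD objective, paired with a simple heuristic (a Gonzalez-style farthest-point assignment, not the heuristic proposed in the paper):

```python
import numpy as np
from itertools import combinations

def max_diameter(points, labels):
    """The MMD objective: largest within-cluster pairwise distance over all clusters."""
    labels = np.asarray(labels)
    worst = 0.0
    for c in set(labels.tolist()):
        members = points[labels == c]
        for a, b in combinations(members, 2):
            worst = max(worst, float(np.linalg.norm(a - b)))
    return worst

def greedy_mmd(points, k):
    """Toy heuristic: pick k far-apart seeds (farthest-point traversal),
    then assign every point to its nearest seed."""
    seeds = [0]
    for _ in range(k - 1):
        d = np.array([[np.linalg.norm(p - points[s]) for s in seeds] for p in points])
        seeds.append(int(np.argmax(d.min(axis=1))))  # point farthest from all seeds
    return [int(np.argmin([np.linalg.norm(p - points[s]) for s in seeds])) for p in points]

rng = np.random.default_rng(1)
# Three well-separated synthetic clusters of 20 points each
pts = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 2)) for c in ([0, 0], [5, 0], [0, 5])])
labels = greedy_mmd(pts, 3)
print("max within-cluster diameter:", round(max_diameter(pts, labels), 2))
```

On well-separated data the greedy heuristic recovers the three clusters, so the reported diameter is small compared to the inter-cluster separation of 5.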
Abstract:
The performance, carcass traits, and litter humidity of broilers fed increasing levels of glycerine derived from biodiesel production were evaluated. In this experiment, 1,575 broilers were distributed, according to a completely randomized experimental design, into five treatments with seven replicates of 45 birds each. Treatments consisted of a control diet and four diets containing 2.5, 5.0, 7.5, or 10% glycerine. The experimental diets contained equal nutritional levels and were based on corn, soybean meal, and soybean oil. The glycerine included in the diets contained 83.4% glycerol, 1.18% sodium, and 208 ppm methanol, with a calculated energy value of 3,422 kcal AMEn/kg. Performance parameters (weight gain, feed intake, feed conversion ratio, live weight, and livability) were monitored when broilers were 7, 21, and 42 days of age. On day 43, litter humidity was determined in each pen, and 14 birds per treatment were sacrificed for the evaluation of carcass traits. During the period of 1 to 7 days, there was a positive linear effect of the treatments on weight gain, feed intake, and live weight. Livability decreased linearly during the period of 1 to 21 days. Over the entire experimental period, no significant effects were observed on performance parameters or carcass traits, but there was a linear increase in litter humidity. Therefore, the inclusion of up to 5% glycerine in the diet did not affect broiler performance during the total rearing period.
Abstract:
It is important to develop new methods for diagnosing relapses of visceral leishmaniasis (VL)/HIV co-infection, enabling earlier detection with less invasive methods. We report the case of a co-infected patient who relapsed after VL treatment, in whom qualitative kDNA PCR showed good performance. kDNA PCR seems to be a useful tool for diagnosing VL and may be a good marker for predicting VL relapses after treatment in co-infected patients with clinical symptoms of the disease.
Abstract:
Until recently, sample preparation was carried out using traditional techniques, such as liquid–liquid extraction (LLE), that use large volumes of organic solvents. Solid-phase extraction (SPE) uses much less solvent than LLE, although the volume can still be significant. These preparation methods are expensive, time-consuming, and environmentally unfriendly. Recently, a great effort has been made to develop new analytical methodologies able to perform direct analyses using miniaturised equipment, thereby achieving high enrichment factors, minimising solvent consumption, and reducing waste. These microextraction techniques improve performance during sample preparation, particularly for complex environmental water samples such as wastewaters, surface and ground waters, tap waters, and sea and river waters. Liquid chromatography coupled to tandem mass spectrometry (LC/MS/MS) and time-of-flight mass spectrometry (TOF/MS) can be used to analyse a broad range of organic micropollutants. Before these compounds can be separated and detected in environmental samples, the target analytes must be extracted and pre-concentrated to make them detectable. In this work, we review the most recent applications of microextraction preparation techniques for determining organic micropollutants in different environmental water matrices: solid-phase microextraction (SPME), in-tube solid-phase microextraction (IT-SPME), stir bar sorptive extraction (SBSE), and liquid-phase microextraction (LPME). Several groups of compounds are considered organic micropollutants because they are continuously released into the environment. Many of these are considered emerging contaminants: analytes generally not covered by existing regulations and now detected more frequently in different environmental compartments. Pharmaceuticals, surfactants, personal care products, and other chemicals are considered micropollutants. These compounds must be monitored because, although they are detected at low concentrations, they might be harmful to ecosystems.
Abstract:
We analyze the discontinuity-preserving problem in TV-L1 optical flow methods. Methods of this type typically create rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists of inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes to solve the problem emerge. One of these consists of separating the pure TV process from the mitigating strategy; this has been used in previous work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison between the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can easily be achieved by using strong regularization and high penalization at image contours.
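A minimal example of the kind of decreasing function used to inhibit diffusion at contours (the exponential form and its parameter are illustrative choices, not the paper's exact regularizer):

```python
import numpy as np

def diffusion_weight(grad_mag, lam=5.0):
    """Decreasing function of the image-gradient magnitude used to inhibit
    diffusion at contours: g(|grad I|) = exp(-lam * |grad I|).
    Both the exponential form and lam are illustrative choices."""
    return np.exp(-lam * grad_mag)

I = np.concatenate([np.zeros(10), np.ones(10)])  # 1-D "image" with one sharp edge
w = diffusion_weight(np.abs(np.gradient(I)))
print("weight in flat region:", w[0])             # full diffusion away from the edge
print("weight at the edge:", round(float(w[10]), 3))  # diffusion strongly inhibited
```

The weight multiplies the TV diffusion term, so smoothing proceeds freely in flat regions but is suppressed across contours; tuning lam per position is exactly where the semi-automatic approach above comes in.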
Abstract:
Interpretation performed by trainee students from the Facultad de Traducción e Interpretación: Estíbaliz López-Leiton Trujillo, Danaide Rodríguez Hernández, and Esther Ramírez Millares.
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network: the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (collecting environmental data) and as a transceiver (transmitting information toward the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital, they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. This thesis investigates the use of WSNs in two possible scenarios and for each of them suggests a solution to the related problems that respects this trade-off. The first scenario considers a network with a high number of nodes, deployed in a given geographical area without detailed planning, that have to transmit data toward a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently toward a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message among the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of simple nodes (to this aim, the computational complexity is moved to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on the absence/presence of the target (fire). The second function considers that each node receives the decision made by the previous node of the chain, compares it with the one derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance, it is necessary to define fusion rules by which each node summarizes the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was implemented and tested in a six-month field experiment.
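A toy simulation of the chain scenario, assuming a simple OR fusion rule (the thesis' actual fusion rules are not reproduced here), shows the trade-off between miss and false-alarm probabilities that such rules must balance:

```python
import random

def chain_or_fusion(n_nodes, p_correct, trials=4000, seed=7):
    """Toy serial-chain detection: each node makes a noisy local binary decision
    and fuses it with the incoming decision via an OR rule (an assumed fusion
    rule, not the thesis' exact one); the last node's output goes to the sink."""
    rng = random.Random(seed)
    miss = fa = fires = quiets = 0
    for t in range(trials):
        fire = (t % 2 == 0)               # alternate ground truth for balanced statistics
        decision = False
        for _ in range(n_nodes):
            local = fire if rng.random() < p_correct else (not fire)
            decision = decision or local  # 1-bit sensor-to-sensor link
        if fire:
            fires += 1
            miss += (not decision)
        else:
            quiets += 1
            fa += decision
    return miss / fires, fa / quiets

miss_rate, fa_rate = chain_or_fusion(n_nodes=5, p_correct=0.9)
print(f"miss rate: {miss_rate:.4f}, false-alarm rate: {fa_rate:.3f}")
```

The OR rule drives the miss probability toward zero at the cost of a high false-alarm rate, which is exactly why a well-designed fusion rule must weigh both error types under the 1-bit link constraint.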
Abstract:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes more and more critical with the progression of the disease, in spite of the available medications. Postural instability is one of the most disabling features of PD and induces difficulties with postural transitions, initiation of movements, gait disorders, and inability to live independently at home, and it is the major cause of falls. Falls are frequent (with over 38% of patients falling each year) and may induce adverse consequences such as soft-tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, which leads patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in early stages of PD and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication.
Such a relation might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of anticipatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation in wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet stance and step-initiation balance tasks in healthy subjects. We used a model to predict center-of-pressure displacements associated with accelerations at the upper and lower back and thigh. This approach allows balance control to be measured outside the laboratory environment, without a force platform. We used wearable accelerometers on a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait-initiation symptoms are not yet clearly manifested. These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally across 1 year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness, measured as Jerk, could be used as an objective measure of PD and its progression.
In the third set of studies, we studied the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for: i) understanding the central mechanisms that control postural motor performance, and how these mechanisms are affected by levodopa; ii) the design of innovative protocols for measuring and remotely monitoring motor performance in the elderly or in subjects with PD; and iii) the development of technologies for improving balance, mobility, and consequently quality of life in patients with balance disorders, such as PD patients, with augmented biofeedback paradigms.
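The Jerk-based smoothness measure mentioned above can be sketched as follows, with a synthetic sway signal standing in for real trunk-accelerometer data:

```python
import numpy as np

def mean_squared_jerk(accel, dt):
    """Jerk = time derivative of acceleration; its mean square is a common
    smoothness index (smaller = smoother movement)."""
    jerk = np.gradient(accel, dt)
    return float(np.mean(jerk ** 2))

dt = 0.01                                   # assumed 100 Hz trunk accelerometer
t = np.arange(0, 5, dt)
smooth_sway = np.sin(2 * np.pi * 0.5 * t)   # slow, regular sway (synthetic)
rng = np.random.default_rng(3)
shaky_sway = smooth_sway + 0.2 * rng.normal(size=t.size)  # same sway plus tremor-like noise

print("jerk, smooth sway:", round(mean_squared_jerk(smooth_sway, dt), 1))
print("jerk, shaky sway:", round(mean_squared_jerk(shaky_sway, dt), 1))
```

Even with identical low-frequency sway, the noisier trace yields a much larger mean-squared jerk, which is what makes the measure sensitive to subtle changes in movement smoothness.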
Abstract:
This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem of low resilience to non-linear distortion has been assessed, and a novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors: in OFDM systems, accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol-timing and carrier-frequency synchronization algorithm has been presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows replacing conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh-fading multipath and is characterized by low complexity. The performance of this approach has been analytically and numerically evaluated; comparing it with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) for any MSS selection strategy; on the receiver-design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed. Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems which have been used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot-tone arrangement within the SC-FDMA frame, and a novel Joint Multi-User (JMU) estimation method for SC-FDMA, have been proposed.
As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee a proper demodulation of the received signal.
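As background for the PAPR problem that RISM addresses, here is a minimal sketch of measuring the PAPR of an unoptimized OFDM symbol (QPSK mapping and 64 subcarriers are assumptions of the example, not details from the thesis):

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR of one OFDM symbol in dB: peak over mean instantaneous power of the
    time-domain (IFFT) waveform. Zero-padding the middle of the spectrum
    oversamples the envelope for a better peak estimate."""
    n = len(freq_symbols)
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    power = np.abs(np.fft.ifft(padded)) ** 2
    return float(10 * np.log10(power.max() / power.mean()))

rng = np.random.default_rng(5)
# Random QPSK on 64 subcarriers; RISM's iterative symbol remapping is not reproduced here
qpsk = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
print(f"PAPR of a random QPSK OFDM symbol: {papr_db(qpsk):.1f} dB")
```

A conventional random mapping typically lands in the high-single-digit dB range, which puts the quasi-constant envelope (PAPR close to 1 dB) claimed for RISM into perspective.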
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.