Abstract:
This paper analyzes the DNA code of several species from the perspective of information content. For that purpose, several concepts and mathematical tools are selected to establish a quantitative method without a priori distorting the alphabet represented by the sequence of DNA bases. The synergies of associating Gray code, histogram characterization and multidimensional scaling visualization lead to a collection of plots with a categorical representation of species and chromosomes.
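The encoding step described above can be sketched in a few lines. This is an illustrative toy, not the paper's exact pipeline: the particular base-to-Gray-code assignment below is an assumption, chosen so that consecutive codes differ by a single bit, and the word length `k` is arbitrary.

```python
from collections import Counter

# Illustrative 2-bit Gray-code assignment for the four DNA bases
# (assumed mapping; the paper's mapping may differ).
GRAY = {"A": "00", "C": "01", "G": "11", "T": "10"}

def encode(seq):
    """Concatenate the Gray code of each base in the sequence."""
    return "".join(GRAY[b] for b in seq)

def word_histogram(bits, k=4):
    """Histogram of overlapping k-bit words of the encoded sequence,
    the kind of characterization that could feed an MDS visualization."""
    return Counter(bits[i:i + k] for i in range(len(bits) - k + 1))

bits = encode("ACGTACGT")
hist = word_histogram(bits, k=4)
```

A histogram like `hist`, computed per chromosome, yields one feature vector per chromosome; pairwise distances between such vectors are what a multidimensional scaling step would project into 2-D plots.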
Abstract:
This paper presents a study of the photo-electronic properties of multilayer a-Si:H/a-SiC:H p-i-n-i-p structures. The study aims to give an insight into the internal electrical characteristics of such a structure in thermal equilibrium, under applied bias and under different illumination conditions. Taking advantage of this insight, it is possible to establish a relation among the electrical behavior of the structure, the structure geometry (i.e. thickness of the light-absorbing intrinsic layers and of the internal n-layer) and the composition of the layers (i.e. optical bandgap controlled through the percentage of carbon dilution in the a-Si1-xCx:H layers). Showing an optical gain for low incident light power, controllable by means of externally applied bias or structure composition, these structures are quite attractive for photo-sensing applications, such as color sensors and large-area color image detectors. An analysis based on numerical ASCA simulations is presented to describe the behavior of different configurations of the device and compared with experimental measurements (spectral response and current-voltage characteristics). (c) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
The constant evolution of the Internet and its increasing use in private and public activities, with a strong impact on their survival, has given rise to an emerging technology. Through cloud computing, it is possible to abstract users from the layers below the business, focusing only on what is most important to manage, with the advantage of being able to grow (or shrink) resources as needed. The cloud paradigm arises from the need to optimize IT resources and is an emergent, rapidly expanding technology. In this regard, after a study of the most common cloud platforms and of the current implementation of the technologies applied at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of Oporto University, an evolution is proposed in order to address certain requirements in the context of cloud computing.
Abstract:
Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 up to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is used to generate visualization maps, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps; the generated charts are therefore a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
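The Gutenberg-Richter fitting step mentioned above is commonly done with the Aki maximum-likelihood estimator of the b-value. The sketch below uses that standard textbook estimator under the assumption of a known completeness magnitude `m_min`; it is not necessarily the exact fitting procedure used in the paper.

```python
import math

def b_value(magnitudes, m_min):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value
    for events at or above the completeness magnitude m_min:
        b = log10(e) / (mean(M) - m_min)
    Continuous-magnitude form; binning corrections are omitted."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_excess = sum(mags) / len(mags) - m_min
    return math.log10(math.e) / mean_excess
```

A b-value computed per F-E region (or per grid cell) gives one of the parameters that can then be compared across regions in a clustering step.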
Abstract:
The scope of this paper is to adapt the standard mean-variance model of Harry Markowitz's theory, creating a simulation tool to find the optimal configuration of the portfolio aggregator and to calculate its profitability and risk. Currently, there is a deep discussion going on in the power system community about the structure and architecture of the future electric system. In this environment, policy makers and electric utilities seek new approaches to access the electricity market; this creates new, challenging positions that call for innovative strategies and methodologies. Decentralized power generation is gaining relevance in liberalized markets, and small and medium-size electricity consumers are also becoming producers ("prosumers"). In this scenario, an electric aggregator is an entity that joins a group of electric clients, customers, producers and "prosumers" together as a single purchasing unit to negotiate the purchase and sale of electricity. The aggregator conducts research on electricity prices, contract terms and conditions in order to promote better energy prices for its clients, and allows small and medium customers to benefit from improved market prices.
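The mean-variance trade-off at the heart of the Markowitz model reduces to two quantities: expected portfolio return w·μ and portfolio risk √(wᵀΣw). The toy sketch below computes both for a two-asset aggregator portfolio; the returns, covariance and weights are made-up illustrative numbers, not values from the paper.

```python
# Two hypothetical assets in the aggregator's portfolio (made-up data).
mu = [0.08, 0.12]                    # expected returns
cov = [[0.04, 0.01],
       [0.01, 0.09]]                 # covariance of returns
w = [0.6, 0.4]                       # portfolio weights (sum to 1)

# Expected portfolio return: w . mu
port_return = sum(wi * mi for wi, mi in zip(w, mu))

# Portfolio variance: w^T cov w, and risk as its square root
port_var = sum(w[i] * cov[i][j] * w[j]
               for i in range(len(w)) for j in range(len(w)))
port_risk = port_var ** 0.5
```

Sweeping the weights over a grid and keeping, for each risk level, the weight vector with the highest return is the simplest way to trace the efficient frontier that such a simulation tool would explore.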
Abstract:
WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. One schedulability analysis for WiDom has already been proposed, but recent research has created a new version of WiDom (we call it slotted WiDom) with lower overhead, and for this version no schedulability analysis exists. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to also work for message streams with release jitter. We have performed experiments with an implementation of slotted WiDom on a real-world platform (MicaZ). We find that, for each message stream, the maximum observed response time never exceeds the calculated response time; this corroborates our belief that our new scheduling theory is applicable in practice.
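Response-time analysis of the kind referred to above is typically a fixed-point iteration. The sketch below is the simplified textbook recurrence for non-preemptive static-priority scheduling with release jitter, not WiDom's exact analysis (which additionally models protocol overhead): the response time of stream i is its own transmission time plus blocking from at most one lower-priority message plus interference from higher-priority streams.

```python
import math

def response_time(C, T, J, i, blocking):
    """Worst-case response time of message stream i under non-preemptive
    static-priority scheduling. Streams 0..i-1 have higher priority;
    C = transmission times, T = periods, J = release jitters, and
    `blocking` is the longest lower-priority transmission already in
    progress. Simplified textbook recurrence, iterated to a fixed point."""
    R = C[i] + blocking
    while True:
        interference = sum(math.ceil((R + J[j]) / T[j]) * C[j]
                           for j in range(i))
        R_new = C[i] + blocking + interference
        if R_new == R:
            return R
        R = R_new
```

Schedulability then amounts to checking, per stream, that the fixed point does not exceed the deadline, which is exactly the comparison the MicaZ experiments validate observationally.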
Abstract:
The contribution of the evapotranspiration from a certain region to the precipitation over the same area is referred to as water recycling. In this paper, we explore the spatiotemporal links between the recycling mechanism and the Iberian rainfall regime. We use a 9 km resolution Weather Research and Forecasting simulation of 18 years (1990-2007) to compute local and regional recycling ratios over Iberia, at the monthly scale, through both an analytical and a numerical recycling model. In contrast to coastal areas, the interior of Iberia experiences a relative maximum of precipitation in spring, suggesting a prominent role of land-atmosphere interactions on the inland precipitation regime during this period of the year. Local recycling ratios are the highest in spring and early summer, coinciding with those areas where this spring peak of rainfall represents the absolute maximum in the annual cycle. This confirms that recycling processes are crucial to explain the Iberian spring precipitation, particularly over the eastern and northeastern sectors. Average monthly recycling values range from 0.04 in December to 0.14 in June according to the numerical model and from 0.03 in December to 0.07 in May according to the analytical procedure. Our analysis shows that the highest values of recycling are limited by the coexistence of two necessary mechanisms: (1) the availability of sufficient soil moisture and (2) the occurrence of appropriate synoptic configurations favoring the development of convective regimes. The analyzed surplus of rainfall in spring has a critical impact on agriculture over large semiarid regions of the interior of Iberia.
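For intuition about what a recycling ratio measures, the sketch below implements a classical bulk estimate in the spirit of Brubaker et al. (1993): the fraction of precipitation over a region that originates from local evapotranspiration, given the moisture influx across the region's boundary. This is an illustrative simplification, not the analytical or numerical model actually used in the paper.

```python
def bulk_recycling_ratio(E, A, F_in):
    """Bulk regional recycling ratio (Brubaker-style estimate):
        rho = E*A / (E*A + 2*F_in)
    E    -- areal-mean evapotranspiration rate over the region
    F_in -- horizontal moisture influx across the region boundary
    A    -- region area (E*A and F_in in the same units).
    Illustrative only; assumes well-mixed moisture over the region."""
    return (E * A) / (E * A + 2.0 * F_in)
```

The formula captures the qualitative behavior reported above: for fixed evapotranspiration, a larger moisture influx (typical of winter synoptic regimes) lowers the recycling ratio, while spring and early-summer conditions with strong local evapotranspiration raise it.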
Abstract:
Two new metal-organic compounds, {[Cu3(μ3-4-ptz)4(μ2-N3)2(DMF)2]·(DMF)2}n (1) and {[Cu(4-ptz)2(H2O)2]}n (2) {4-ptz = 5-(4-pyridyl)tetrazolate}, with 3D and 2D coordination networks, respectively, have been synthesized while studying the effect of reaction conditions on the coordination modes of 4-ptz, employing the [2 + 3] cycloaddition as a tool for generating the 5-substituted tetrazole ligands in situ from 4-pyridinecarbonitrile and NaN3 in the presence of a copper(II) salt. The obtained compounds have been structurally characterized, and the topological analysis of 1 discloses a topologically unique trinodal 3,5,6-connected 3D network which, upon further simplification, results in a uninodal 8-connected underlying net with the bcu (body-centred cubic) topology driven by the [Cu3(μ2-N3)2] cluster nodes and μ3-4-ptz linkers. In contrast, the 2D metal-organic network in 2 has been classified as a uninodal 4-connected underlying net with the sql (Shubnikov tetragonal plane net) topology assembled from the Cu nodes and μ2-4-ptz linkers. The catalytic investigations disclosed that 1 and 2 act as active catalyst precursors for the microwave-assisted homogeneous oxidation of secondary alcohols (1-phenylethanol, cyclohexanol, 2-hexanol, 3-hexanol, 2-octanol and 3-octanol) with tert-butyl hydroperoxide, leading to yields of the corresponding ketones of up to 86% (TOF = 430 h−1) and 58% (TOF = 290 h−1) in the oxidation of 1-phenylethanol and cyclohexanol, respectively, after 1 h under low-power (10 W) microwave irradiation and in the absence of any added solvent or additive.
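The turnover-frequency figures quoted above follow from the standard definition TOF = (moles of product)/(moles of catalyst × time). The sketch below shows this arithmetic; the 0.2 mol% catalyst loading is an assumption chosen only so that the illustrative numbers reproduce the quoted 86% yield / 430 h−1 pair, not a loading stated in the abstract.

```python
def turnover_frequency(yield_frac, n_substrate, n_catalyst, time_h):
    """TOF in h^-1: moles of product formed per mole of catalyst per hour.
    yield_frac is the fractional yield (0..1) relative to substrate."""
    return (yield_frac * n_substrate) / (n_catalyst * time_h)

# Hypothetical run: 1.0 mmol substrate, 0.2 mol% catalyst, 1 h, 86% yield.
tof = turnover_frequency(0.86, 1.0, 0.002, 1.0)
```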
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
A study of the chemical transformations of cork during heat treatments was made using colour variation and FTIR analysis. The cork-enriched fractions from Quercus cerris bark were subjected to isothermal heating in the temperature range 150–400 °C, with treatment times from 5 to 90 min. Mass loss ranged from 3% (90 min at 150 °C) to 71% (60 min at 350 °C). FTIR showed that hemicelluloses were thermally degraded first, while suberin remained the most heat-resistant component. The change in CIE-Lab parameters was rapid for low-intensity treatments where no significant mass loss occurred (at 150 °C, L* decreased from the initial 51.5 to 37.3 after 20 min). The decrease in all colour parameters continued with temperature until they remained substantially constant beyond 40% mass loss. Modelling of the thermally induced mass loss could thus be made using colour analysis, which is applicable to monitoring the production of heat-expanded insulation agglomerates.
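A colour-based mass-loss model of the kind mentioned above can be as simple as a least-squares regression of mass loss against lightness L*. The sketch below fits a straight line to made-up data points (the paper's calibration data and model form may differ); within the regime where colour still varies, such a fit lets mass loss be predicted from a colour reading alone.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration: lightness L* vs. fractional mass loss.
L_star = [50.0, 40.0, 30.0, 20.0]
mass_loss = [0.0, 1.0, 2.0, 3.0]     # darker cork -> greater mass loss
slope, intercept = linear_fit(L_star, mass_loss)
```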
Abstract:
The design of magnetic cores can be carried out by taking into account the optimization of different parameters in accordance with the application requirements. Considering the specifications of the fast field cycling nuclear magnetic resonance (FFC-NMR) technique, the magnetic flux density distribution at the sample insertion volume is one of the core parameters that needs to be evaluated. Recently, it has been shown that FFC-NMR magnets can be built on the basis of solenoid coils with ferromagnetic cores. Since this type of apparatus requires magnets with high magnetic flux density uniformity, a new type of magnet using a ferromagnetic core, copper coils, and superconducting blocks was designed with improved magnetic flux density distribution. In this paper, the design aspects of the magnet are described and discussed, with emphasis on the improvement of the magnetic flux density homogeneity (ΔB/B0) in the air gap. The magnetic flux density distribution is analyzed based on 3-D simulations and NMR experimental results.
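The homogeneity figure of merit ΔB/B0 can be evaluated directly from sampled field values over the insertion volume. The sketch below uses the common peak-to-peak definition with B0 taken as the mean field; the paper's exact definition of ΔB and B0 may differ.

```python
def homogeneity(B_samples):
    """Peak-to-peak relative field inhomogeneity dB/B0 over a set of
    flux-density samples (e.g. from a 3-D simulation of the air gap),
    with B0 taken as the mean field. Smaller is more homogeneous."""
    B0 = sum(B_samples) / len(B_samples)
    return (max(B_samples) - min(B_samples)) / B0
```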
Abstract:
Wireless Body Area Network (WBAN) is the most convenient, cost-effective, accurate, and non-invasive technology for e-health monitoring. The performance of WBAN may be disturbed when coexisting with other wireless networks. Accordingly, this paper provides a comprehensive study and in-depth analysis of coexistence issues and interference mitigation solutions in WBAN technologies. A thorough survey of state-of-the-art research on WBAN coexistence issues is conducted. The survey classifies, discusses, and compares the studies according to the parameters used to analyze the coexistence problem. Solutions suggested by the studies are then classified according to the techniques they follow, and concomitant shortcomings are identified. Moreover, the coexistence problem in WBAN technologies is mathematically analyzed and formulas are derived for the probability of successful channel access for different wireless technologies in the presence of an interfering network. Finally, extensive simulations are conducted using OPNET with several real-life scenarios to evaluate the impact of coexistence interference on different WBAN technologies. In particular, three main WBAN wireless technologies are considered: IEEE 802.15.6, IEEE 802.15.4, and low-power WiFi. The mathematical analysis and the simulation results are discussed, and the impact of an interfering network on the different wireless technologies is compared and analyzed. The results show that an interfering network (e.g., standard WiFi) has an impact on the performance of WBAN and may disrupt its operation. In addition, using low-power WiFi for WBANs is investigated and proved to be a feasible option compared to other wireless technologies.
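As a flavor of what a "probability of successful channel access" formula looks like, the sketch below gives the classical pure-ALOHA-style overlap model: a frame succeeds if no interferer transmission begins within a vulnerability window of twice the frame duration. This is a generic textbook model under a Poisson-arrival assumption, not the specific derivation in the paper.

```python
import math

def p_no_collision(lam_interferer, frame_time):
    """Probability that a WBAN frame of duration frame_time sees no
    overlapping transmission from an interfering network whose frames
    arrive as a Poisson process of rate lam_interferer (pure-ALOHA
    vulnerability window of 2 * frame_time):
        P = exp(-2 * lambda * T)"""
    return math.exp(-2.0 * lam_interferer * frame_time)
```

Even this crude model reproduces the qualitative conclusion above: as the interferer's traffic rate grows, the WBAN's success probability decays exponentially, which is why a heavily loaded standard-WiFi neighbor can disrupt WBAN operation.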
Abstract:
Auditory event-related potentials (AERPs) are widely used in diverse fields of today's neuroscience, concerning auditory processing, speech perception, language acquisition, neurodevelopment, attention and cognition in normal aging, gender, and developmental, neurologic and psychiatric disorders. However, their transposition to clinical practice has remained minimal, mainly due to scarce literature on normative data across age, the wide spectrum of results, the variety of auditory stimuli used, and the different neuropsychological meanings attributed to AERP components by different authors. One of the most prominent AERP components studied in recent decades is N1, which reflects auditory detection and discrimination. Subsequently, N2 indicates attention allocation and phonological analysis. The simultaneous analysis of N1 and N2 elicited by feasible novelty experimental paradigms, such as the auditory oddball, seems an objective method to assess central auditory processing. The aim of this systematic review was to bring forward normative values for auditory oddball N1 and N2 components across age. EBSCO, PubMed, Web of Knowledge and Google Scholar were systematically searched for studies that elicited N1 and/or N2 by an auditory oddball paradigm. A total of 2,764 papers were initially identified in the databases, of which 19 resulted from hand search and additional references, spanning 1988 to 2013, the last 25 years. A final total of 68 studies met the eligibility criteria, with a total of 2,406 participants from control groups for N1 (age range 6.6–85 years; mean 34.42) and 1,507 for N2 (age range 9–85 years; mean 36.13). Polynomial regression analysis revealed that N1 latency decreases with aging at Fz and Cz; N1 amplitude at Cz decreases from childhood to adolescence and stabilizes after 30–40 years, while at Fz the decrement finishes by 60 years and increases markedly after this age. Regarding N2, latency did not covary with age, but amplitude showed a significant decrement for both Cz and Fz.
Results suggested reliable normative values for the Cz and Fz electrode locations; however, changes in brain development and component topography over age should be considered in clinical practice.
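The polynomial-regression step used in the review can be sketched as below. The age/latency pairs are made-up illustrative values, not data from the reviewed studies; a quadratic (degree-2) fit is assumed here simply to capture a decreasing trend that flattens with age.

```python
import numpy as np

# Hypothetical N1 latency (ms) by age (years) -- illustrative data only.
age = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)
latency = np.array([120, 110, 104, 100, 98, 97, 97], dtype=float)

# Degree-2 polynomial regression of latency on age.
coeffs = np.polyfit(age, latency, deg=2)   # [a2, a1, a0]
trend = np.poly1d(coeffs)

# Normative-style lookup: predicted latency at an arbitrary age.
predicted_at_25 = trend(25.0)
```

Fitting such a curve per electrode (Cz, Fz) and per component (N1, N2) over pooled control-group data is what yields age-indexed normative values of the kind the review reports.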