930 results for Input-output data


Relevance: 30.00%

Abstract:

INTRODUCTION: Handwriting is a modality of language production whose cerebral substrates remain poorly known, although the existence of writing-specific regions has been postulated. Descriptions of brain-damaged patients with agraphia and, more recently, several neuroimaging studies suggest the involvement of different brain regions. However, results vary with the methodological choices made and do not always discriminate between writing-specific processes and the motor or linguistic processes shared with other abilities. METHODS: We used the Activation Likelihood Estimation (ALE) meta-analytical method to identify the cerebral network of areas commonly activated during handwriting in 18 neuroimaging studies published in the literature. Included contrasts were also classified according to the control tasks used, whether non-specific motor/output control or linguistic/input control. These data were included in two secondary meta-analyses in order to reveal the functional role of the different areas of this network. RESULTS: An extensive, mainly left-hemisphere network of 12 cortical and subcortical areas was obtained, three of which were considered primarily writing-specific (the left superior frontal sulcus/middle frontal gyrus area, the left intraparietal sulcus/superior parietal area, and the right cerebellum), while the others related to non-specific motor processes (primary motor and sensorimotor cortex, supplementary motor area, thalamus, and putamen) or linguistic processes (ventral premotor cortex, posterior/inferior temporal cortex). CONCLUSIONS: This meta-analysis provides a description of the cerebral network of handwriting as revealed by various types of neuroimaging experiments and confirms the crucial involvement of the left frontal and superior parietal regions. These findings provide new insights into the cognitive processes involved in handwriting and their cerebral substrates.


A conceptual framework for crop production efficiency was derived using the thermodynamic efficiency concept, in order to generate a tool for evaluating the performance of agricultural systems and to quantify the influence of determining factors on this performance. In thermodynamics, efficiency is the ratio between the output and input of energy. To establish this relationship in agricultural systems, it was assumed that the input energy is represented by the attainable crop yield, as predicted through simulation models based on environmental variables. The FAO agroecological zones method was applied to assess attainable sugarcane yield, while Instituto Brasileiro de Geografia e Estatística (IBGE) data were used as the observed yield. Sugarcane production efficiency in São Paulo state was evaluated in two growing seasons, and its correlation with some of the physical factors that regulate production was calculated. A strong relationship was identified between crop production efficiency and soil aptitude, which allowed the effect of agribusiness factors on crop production efficiency to be inferred. The relationships between production efficiency and climatic variables were also quantified and indicated that solar radiation, annual rainfall, water deficiency, and maximum air temperature are the main factors affecting sugarcane production efficiency.
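The efficiency ratio described above can be sketched numerically; the yield figures below are hypothetical, for illustration only, and are not values from the study:

```python
# Crop production efficiency as an output/input ratio, analogous to
# thermodynamic efficiency: observed yield divided by the attainable
# yield predicted by a simulation model from environmental variables.
# The yield values used here are hypothetical.

def production_efficiency(observed_yield, attainable_yield):
    """Return the crop production efficiency (dimensionless, 0-1)."""
    if attainable_yield <= 0:
        raise ValueError("attainable yield must be positive")
    return observed_yield / attainable_yield

# Example: a hypothetical observed sugarcane yield of 72 t/ha against
# an attainable yield of 120 t/ha from a simulation model.
eff = production_efficiency(72.0, 120.0)
print(f"production efficiency: {eff:.2f}")  # 0.60
```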


A comment on the article "Local sensitivity analysis for compositional data with application to soil texture in hydrologic modelling" written by L. Loosvelt and co-authors. The present comment centres on three specific points. The first concerns the fact that the authors avoid the use of ilr coordinates. The second refers to a generalization of sensitivity analysis when input parameters are compositional. The third aims to show that the role of the Dirichlet distribution in the sensitivity analysis is irrelevant.
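For readers unfamiliar with the ilr coordinates the comment refers to, a minimal sketch of the transform for a single composition follows (one standard ilr basis is used here; the choice of basis is arbitrary and this is not necessarily the construction the comment's authors have in mind):

```python
import math

# Isometric log-ratio (ilr) transform: maps a D-part composition
# (strictly positive parts) to D-1 unconstrained real coordinates.
def ilr(parts):
    if any(p <= 0 for p in parts):
        raise ValueError("all parts must be strictly positive")
    coords = []
    for i in range(1, len(parts)):
        # geometric mean of the first i parts, contrasted with part i+1
        gmean = math.exp(sum(math.log(p) for p in parts[:i]) / i)
        coords.append(math.sqrt(i / (i + 1)) * math.log(gmean / parts[i]))
    return coords

# A composition with equal parts maps to the origin of ilr space.
print(ilr([1/3, 1/3, 1/3]))  # [0.0, 0.0] up to rounding
```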


Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful approaches to integrating heterogeneous data types is kernel-based methods. Kernel-based data integration consists of two basic steps: first, a suitable kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of dimensionality reduction. Moreover, we improve the interpretability of kernel PCA by adding to the plot a representation of the input variables belonging to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum local growth, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets, together with the simultaneous representation of samples and variables, gives a better understanding of the biological knowledge.
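The two-step scheme described above can be sketched as follows; the RBF kernels, the equal-weight kernel sum, and the synthetic data are illustrative assumptions, not necessarily the choices made in the paper:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    """Double-center a kernel matrix (kernel PCA prerequisite)."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kernel_pca(K, n_components=2):
    """Project samples onto the leading kernel principal components."""
    Kc = center_kernel(K)
    w, V = np.linalg.eigh(Kc)               # eigenvalues ascending
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0))    # scaled sample projections

rng = np.random.default_rng(0)
X1 = rng.normal(size=(30, 5))   # data source 1
X2 = rng.normal(size=(30, 8))   # data source 2, same 30 samples

# Step 1: one kernel per data set; step 2: combine them (here, a sum).
K = rbf_kernel(X1, gamma=0.1) + rbf_kernel(X2, gamma=0.1)
Z = kernel_pca(K, n_components=2)
print(Z.shape)  # (30, 2)
```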


This paper deals with the design of nonregenerative relaying transceivers in cooperative systems where channel state information (CSI) is available at the relay station. The conventional nonregenerative approach is amplify and forward (A&F), where the signal received at the relay is simply amplified and retransmitted. In this paper, we propose an alternative linear transceiver design for nonregenerative relaying (including both pure relaying and the cooperative transmission case), making proper use of CSI at the relay station. Specifically, we design the optimum linear filtering performed on the data to be forwarded at the relay. As the optimization criterion, we consider the maximization of mutual information (which provides an information rate at which reliable communication is possible) for a given available transmission power at the relay station. Three different levels of CSI can be considered at the relay station: only first-hop channel information (between source and relay); first-hop and second-hop channel (between relay and destination) information; or a third situation in which the relay has complete cooperative channel information for all the links: the first- and second-hop channels and also the direct channel between source and destination. Although the latter is a less realistic situation, since it requires the destination to inform the relay station about the direct channel, it is useful as an upper benchmark. In this paper, we consider the last two CSI cases. We compare the resulting performance with that of the conventional A&F approach, and also with the performance of regenerative relays and of direct noncooperative transmission, for two particular cases: narrowband multiple-input multiple-output transceivers and wideband single-input single-output orthogonal frequency-division multiplex transmissions.
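As context for the conventional baseline mentioned above, the end-to-end SNR of basic A&F with a power-normalizing relay gain has a standard textbook form; this sketch shows that baseline only, not the optimized linear filter proposed in the paper:

```python
import math

# End-to-end SNR of basic amplify-and-forward over two hops with a
# power-normalizing relay gain (standard result for the conventional
# A&F baseline; per-hop SNRs are in linear scale).
def af_end_to_end_snr(snr1, snr2):
    """snr1: source->relay SNR, snr2: relay->destination SNR."""
    return snr1 * snr2 / (snr1 + snr2 + 1)

def af_rate(snr1, snr2):
    """Achievable rate in bit/s/Hz; the 1/2 accounts for the two hops."""
    return 0.5 * math.log2(1 + af_end_to_end_snr(snr1, snr2))

# The relayed SNR never exceeds the weaker of the two hops.
print(af_end_to_end_snr(10.0, 10.0))  # ~4.76, below either hop's SNR
```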


Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEMs), has become an indispensable tool for studying DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, the Self-Organising Map (SOM) has been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method which clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
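A minimal self-organising map, shown below on toy 2-D inputs, illustrates the kind of assumption-free clustering and dimensionality reduction SOM performs; real EEM studies use dedicated SOM toolboxes, much larger maps, and high-dimensional unfolded EEMs:

```python
import numpy as np

class SimpleSOM:
    """Tiny SOM sketch: a rows x cols grid of weight vectors."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        # unit positions on the 2-D map grid, and their weight vectors
        self.grid = np.array([(r, c) for r in range(rows)
                              for c in range(cols)], dtype=float)
        self.w = rng.normal(size=(rows * cols, dim))

    def bmu(self, x):
        """Index of the best-matching unit for input vector x."""
        return int(np.argmin(np.sum((self.w - x) ** 2, axis=1)))

    def train(self, X, epochs=20, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)              # shrinking step size
            sigma = sigma0 * (1 - t / epochs) + 0.3  # shrinking radius
            for x in X:
                b = self.bmu(x)
                d2 = np.sum((self.grid - self.grid[b]) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))   # neighbourhood kernel
                self.w += lr * h[:, None] * (x - self.w)

# Two well-separated sample groups end up on different map units.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-5, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
som = SimpleSOM(4, 4, 2)
som.train(X)
print(som.bmu(np.array([-5.0, -5.0])), som.bmu(np.array([5.0, 5.0])))
```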


Research on power-line communications has concentrated on home automation, broadband indoor communications, and broadband data transfer in the low-voltage distribution network between homes and the transformer station. Little research has focused on the high-frequency characteristics of industrial low-voltage distribution networks. The industrial low-voltage distribution network may be utilised as a communication channel for the data transfer required by the on-line condition monitoring of electric motors. The advantage of using power-line data transfer is that it does not require the installation of new cables. In the first part of this work, the characteristics of industrial low-voltage distribution network components and of a pilot distribution network are measured and modelled with respect to power-line communication frequencies up to 30 MHz. The distributed inductances, capacitances, and attenuation of MCMK-type low-voltage power cables are measured in the frequency band 100 kHz - 30 MHz, and an attenuation formula for the cables is derived from the measurements. The input impedances of electric motors (15-250 kW) are measured using several signal couplings, and a measurement-based input impedance model for an electric motor with a slotted stator is formed. The model is designed for the frequency band 10 kHz - 30 MHz. Next, the effect of a DC (direct current) voltage link inverter on power-line data transfer is briefly analysed. Finally, a pilot distribution network is built, and the signal attenuation in its communication channels is measured. The results are compared with simulations carried out using the developed models and the measured parameters for cables and motors. In the second part of this work, a narrowband power-line data transfer system is developed for the data transfer needed by the on-line condition monitoring of electric motors. It is built from standard integrated circuits. The system is tested in the pilot environment, and the applicability of the system to the data transfer required by the on-line condition monitoring of electric motors is analysed.
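Cable attenuation formulas of the kind derived in this work are often expressed in a parametric form that grows with frequency and length; the functional form and all coefficients below are hypothetical placeholders, not the values measured here for MCMK cables:

```python
# A common parametric form for cable attenuation in power-line
# channels: att(f, l) = (a0 + a1 * f**k) * l, increasing with both
# frequency and cable length. Coefficients are hypothetical.
def cable_attenuation_db(f_hz, length_m, a0=2e-3, a1=4e-8, k=0.7):
    """Attenuation in dB for frequency f_hz (Hz) over length_m (m)."""
    return (a0 + a1 * f_hz ** k) * length_m

# Attenuation across the measured band, 100 kHz - 30 MHz, over 100 m:
for f in (100e3, 1e6, 30e6):
    print(f"{f / 1e6:5.1f} MHz: {cable_attenuation_db(f, 100):6.3f} dB")
```

Note that attenuation in dB scales linearly with cable length in this model, which is what makes the per-metre coefficients convenient to tabulate from measurements.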


The purpose of this work was to implement a system for measuring the performance of a motor vehicle. The system consists of a cylindrical roller and a data acquisition system. The roller, whose moment of inertia is known, is accelerated through the vehicle's driving wheels, and power and torque values are computed from the measured quantities as a function of engine speed. Data acquisition is carried out with a PC microcomputer fitted with a data acquisition card. The PC provides the user interface, through which the results are presented to the user as graphs; the user interface is also used to store the results and to print reports. The theoretical part examines the methods and equipment used for performance measurement, as well as the structure of a data acquisition system and the factors affecting its selection. In the practical part, a signal-conditioning board is designed, by means of which the signals obtained from various sensors can be scaled to fit the input range of the data acquisition card. The operation of the user interface and the tools used to build it are also examined.
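The principle behind an inertia-roller dynamometer like the one described can be sketched directly from the physics; the numeric values below are hypothetical, for illustration only:

```python
# Inertia-roller dynamometer principle: with a known roller moment of
# inertia I, the torque at the roller is T = I * dω/dt and the power
# is P = T * ω. Values below are hypothetical.
def roller_torque(inertia, angular_accel):
    """Torque in N·m from inertia (kg·m²) and dω/dt (rad/s²)."""
    return inertia * angular_accel

def roller_power(inertia, omega, angular_accel):
    """Power in W at roller angular speed omega (rad/s)."""
    return roller_torque(inertia, angular_accel) * omega

# Example: I = 1.5 kg·m², ω = 100 rad/s, dω/dt = 20 rad/s²
print(roller_torque(1.5, 20.0))        # 30.0 N·m
print(roller_power(1.5, 100.0, 20.0))  # 3000.0 W
```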


Flood simulation studies use spatio-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of convective structures in precipitation systems producing floods in Catalonia (NE Spain). To this end, heavy rainfall events recorded between 1996 and 2000 were analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are then introduced into and analysed with a GIS. In a first step, groups of connected pixels with convective precipitation are identified, and only convective structures with an area greater than 32 km² are selected. Then, the geometric characteristics (area, perimeter, orientation, and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated, and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation, and mean areal precipitation, are the distributions that best fit the observations. The statistical descriptors and the probability distribution functions obtained are of direct use as input to spatial rainfall generators.
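Fitting a Generalized Pareto distribution to cell areas can be sketched with a simple method-of-moments estimator; both the estimator and the synthetic data below are illustrative assumptions, not the study's actual fitting procedure or data:

```python
import random
from statistics import mean, pvariance

# Method-of-moments fit of a Generalized Pareto distribution (GPD)
# with zero threshold: matching the sample mean m and variance v gives
# shape xi = (1 - m²/v) / 2 and scale sigma = m * (1 - xi).
def gpd_fit_moments(data):
    """Return (shape xi, scale sigma) from mean/variance matching."""
    m, v = mean(data), pvariance(data)
    xi = 0.5 * (1 - m * m / v)
    sigma = m * (1 - xi)
    return xi, sigma

# Synthetic "cell areas": exponential data are the xi = 0 special case
# of the GPD, so the fitted shape should come out close to zero.
random.seed(42)
areas = [random.expovariate(1 / 50.0) for _ in range(5000)]
xi, sigma = gpd_fit_moments(areas)
print(round(xi, 3), round(sigma, 1))
```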




A Fortran 77 program, SSPBE, designed to solve the spherically symmetric Poisson-Boltzmann equation using a cell model for ionic macromolecular aggregates, or macroions, is presented. The program includes an adsorption model for ions at the aggregate surface. The working algorithm solves the Poisson-Boltzmann equation in its integral representation using the Picard iteration method. Input parameters are introduced via an ASCII file, sspbe.txt. Output files yield the radial distances versus the mean-field potentials and average molar ion concentrations, the molar concentrations of ions at the cell boundary, the self-consistent degree of ion adsorption at the surface, and other related data. Ion binding to ionic, zwitterionic, and reverse micelles is presented as a representative example of the applications of the SSPBE program.
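Picard iteration, the fixed-point scheme the program applies to the integral form of the equation, can be illustrated on a scalar fixed-point problem; the toy equation x = cos(x) below is a stand-in, not the Poisson-Boltzmann integral equation itself:

```python
import math

# Picard (fixed-point) iteration with optional under-relaxation: the
# relaxation factor damps the update, a common remedy when the plain
# iteration converges slowly or oscillates.
def picard(f, x0, relax=1.0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = (1 - relax) * x + relax * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Solve x = cos(x); the iteration converges because |cos'(x)| < 1 here.
root = picard(math.cos, x0=0.5)
print(round(root, 6))  # 0.739085
```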


The Catalan Research Portal (Portal de la Recerca de Catalunya, PRC) is an initiative carried out by the Consortium for University Services in Catalonia (CSUC) in coordination with nearly all universities in Catalonia. The Portal will provide an online, CERIF-compliant collection of all research outputs produced by Catalan HEIs, together with appropriate contextual information describing the specific environment in which each output was generated (such as researchers, research groups, and research projects). The initial emphasis of the Catalan Research Portal will be on publications, but other outputs, such as patents and eventually research data, will be addressed as well. These guidelines provide information for PRC data providers on exposing and exchanging their research information metadata in a CERIF XML-compatible structure, allowing them not just to exchange validated CERIF XML data with the PRC platform, but also to improve their general interoperability by being able to deliver CERIF-compatible outputs.
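Purely as an illustration of the kind of record exchanged, CERIF XML publication records follow a cfXxx entity naming pattern like the sketch below; the namespace, schema version, attributes, and identifiers shown are assumptions and must be checked against the official CERIF XML schema and the PRC guidelines themselves:

```xml
<!-- Hypothetical sketch of a CERIF XML publication record; verify
     every namespace, attribute, and element against the official
     CERIF XML schema before use. -->
<CERIF xmlns="urn:xmlns:org:eurocris:cerif-1.6-2"
       release="1.6" date="2014-01-01" sourceDatabase="example-provider">
  <cfResPubl>
    <cfResPublId>pub-0001</cfResPublId>
    <cfResPublDate>2014-05-01</cfResPublDate>
    <cfTitle cfLangCode="en" cfTrans="o">Example article title</cfTitle>
  </cfResPubl>
</CERIF>
```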


This work describes, through examples, a simple way to carry out experimental design calculations using spreadsheets. The aim of this tutorial is to introduce an alternative to sophisticated commercial programs, whose data input and output are often unnecessarily complex. An overview of the principal methods is also briefly presented. The spreadsheets are suitable for handling different types of computations, such as screening procedures applying factorial design and optimization procedures based on response surface methodology. Furthermore, the spreadsheets are sufficiently versatile to be adapted to specific experimental designs.
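The run table of a two-level full factorial screening design, of the kind the tutorial builds in a spreadsheet, can be generated in a few lines; the factor names below are illustrative:

```python
import itertools

# Two-level full factorial design: every combination of coded
# low (-1) and high (+1) levels for each factor, 2**k runs in total.
levels = {
    "temperature": (-1, +1),
    "pH":          (-1, +1),
    "time":        (-1, +1),
}
runs = list(itertools.product(*levels.values()))
print(len(runs))  # 2**3 = 8 experimental runs
for run in runs:
    print(dict(zip(levels, run)))
```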


Electronic services have spread rapidly in recent years. Functional public-sector electronic services are continuously being developed for ever new purposes. At present, the supply of electronic services is characterized by fragmentation: each organization offering services develops them independently, in its own way. Developing services in a customer-oriented way requires designing service ensembles at the architectural level. In addition to developing individual applications, attention must be paid to how these applications work together as part of a larger whole. In service-oriented architecture design, the starting point is that each individual application is itself a service, producing its specified output in a form that other applications are able to understand. As a result of this work, it was found that although several reference architectures have been developed for electronic government, the practical work of integrating services is still incomplete. Combining individual services into larger ensembles requires determined architectural design as well as nationally coordinated planning. Sharing information across organizational boundaries raises a set of questions that have not yet been answered. Data protection and privacy cannot be sacrificed when designing electronic government.