940 results for visitor information, network services, data collecting, data analysis, statistics, locating
Abstract:
This data article refers to the research article entitled "The role of ascorbate peroxidase, guaiacol peroxidase, and polysaccharides in cassava (Manihot esculenta Crantz) roots under postharvest physiological deterioration" by Uarrota et al. (2015), Food Chemistry 197, Part A, 737-746. The stress due to PPD of cassava roots leads to the formation of ROS, which are extremely harmful and accelerate cassava spoilage. To prevent or alleviate injuries from ROS, plants have evolved antioxidant systems that include non-enzymatic and enzymatic defence systems such as ascorbate peroxidase, guaiacol peroxidase and polysaccharides. This data article provides a dataset called newdata, in RData format, with 60 observations and six variables: the first two variables (Samples and Cultivars) and, in the remaining columns, spectrophotometric data of ascorbate peroxidase, guaiacol peroxidase, tocopherol and total proteins, together with arcsine-transformed data of cassava PPD scoring. For further interpretation and analysis in R software, a report is also provided. Means and standard deviations of all variables are provided in the Supplementary tables (data.long3.RData, data.long4.RData and meansEnzymes.RData); raw PPD scoring data without transformation (PPDmeans.RData) and days of storage (days.RData) are also provided for reproducibility of the data analysis in R software.
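The "arcsine-transformed data" mentioned above refers to the arcsine square-root transformation commonly applied to proportion-type scores (such as PPD scoring) to stabilise variance before statistical analysis. As a minimal sketch of that standard transform (the function name and example values are illustrative, not taken from the dataset):

```python
import math

def arcsine_sqrt(p):
    """Arcsine square-root transform of a proportion in [0, 1].

    Widely used to stabilise the variance of proportion data
    (e.g. a PPD score expressed as a fraction of root deterioration)
    before applying ANOVA or regression.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("proportion must lie in [0, 1]")
    return math.asin(math.sqrt(p))

# Example: transform a few hypothetical PPD proportions
scores = [0.0, 0.25, 0.5, 1.0]
transformed = [arcsine_sqrt(s) for s in scores]
```

The equivalent step in R is `asin(sqrt(p))`; the output is in radians, so 0 maps to 0 and 1 maps to pi/2.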
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management.
Abstract:
We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.
Abstract:
The present paper analyses the link between firms' decisions to innovate and the barriers that prevent them from being innovative. The aim is twofold. First, it analyses three groups of barriers to innovation: the cost of innovation projects, lack of knowledge, and market conditions. Second, it presents the main steps taken by the Catalan Government to promote the creation of new firms and to reduce barriers to innovation. The data set used is based on the 2004 official innovation survey of Catalonia, which was taken from the Spanish CIS-4 sample. This sample includes individual information on 2,954 Catalan firms in manufacturing industries and knowledge-intensive services (KIS). The empirical analysis reveals pronounced differences regarding a firm's propensity to innovate and its perception of barriers. Moreover, the results show that cost and knowledge barriers seem to be the most important, and that there are substantial sectoral differences in the way that firms react to barriers. The results of this paper have important implications for the design of future public policy to promote entrepreneurship and innovation together.
Abstract:
This paper examines the relationship between the level of public infrastructure and the level of productivity using panel data for the Spanish provinces over the period 1984-2004, a period which is particularly relevant due to the substantial changes occurring in the Spanish economy at that time. The underlying model used for the data analysis is based on the wage equation, one of a handful of simultaneous equations which, when satisfied, correspond to the short-run equilibrium of New Economic Geography theory. It is estimated using a spatial panel model with fixed time and province effects, so that unmodelled space- and time-constant sources of heterogeneity are eliminated. The model assumes that productivity depends on the level of educational attainment and the public capital stock endowment of each province. The results show that although changes in productivity are positively associated with changes in public investment within the same province, there is a negative relationship between productivity changes and changes in public investment in other regions.
Abstract:
The paper investigates the role of real exchange rate misalignment in long-run growth for a set of ninety countries using time series data from 1980 to 2004. We first estimate a panel data model (using fixed and random effects) for the real exchange rate, with different model specifications, in order to produce estimates of the equilibrium real exchange rate; these are then used to construct measures of real exchange rate misalignment. We also provide an alternative set of estimates of real exchange rate misalignment using panel cointegration methods. The variables used in our real exchange rate models are: real per capita GDP; net foreign assets; terms of trade; and government consumption. The results for the two-step System GMM panel growth models indicate that the coefficients for real exchange rate misalignment are positive for different model specifications and samples, which means that a more depreciated (appreciated) real exchange rate helps (harms) long-run growth. The estimated coefficients are higher for developing and emerging countries.
Abstract:
This paper investigates the role of institutions in determining per capita income levels and growth. It contributes to the empirical literature by using different variables as proxies for institutions and by developing a deeper analysis of the issues arising from the use of weak and too many instruments in per capita income and growth regressions. The cross-section estimation suggests that institutions seem to matter, regardless of whether they are the only explanatory variable or are combined with geographical and integration variables, although most models suffer from the issue of weak instruments. The growth models provide some interesting results: there is mixed evidence on the role of institutions, and such evidence is more likely to be associated with law and order and investment profile; government spending is an important policy variable; and collapsing the number of instruments results in fewer significant coefficients for institutions.
Abstract:
Data analysis, presentation and distribution are of utmost importance to a genome project. A public-domain software package, ACeDB, has been chosen as the common basis for parasite genome databases, and a first release of TcruziDB, the Trypanosoma cruzi genome database, is available by ftp from ftp://iris.dbbm.fiocruz.br/pub/genomedb/TcruziDB, as are versions of the software for different operating systems (ftp://iris.dbbm.fiocruz.br/pub/unixsoft/). Moreover, data originating from the project are available from the WWW server at http://www.dbbm.fiocruz.br. It contains biological and parasitological data on CL Brener, its karyotype, all available T. cruzi sequences from GenBank, data on the EST-sequencing project and on available libraries, a T. cruzi codon table, and a listing of activities and participating groups in the genome project, as well as meeting reports. T. cruzi discussion lists (tcruzi-l@iris.dbbm.fiocruz.br and tcgenics@iris.dbbm.fiocruz.br) are being maintained for communication and to promote collaboration in the genome project.
Abstract:
Within the framework of a retrospective study of the incidence of hip fractures in the canton of Vaud (Switzerland), all cases of hip fracture occurring among the resident population in 1986 and treated in the hospitals of the canton were identified from among five different information sources. Relevant data were then extracted from the medical records. At least two sources of information were used to identify cases in each hospital, among them the statistics of the Swiss Hospital Association (VESKA). These statistics were available for 9 of the 18 hospitals in the canton that participated in the study. The number of cases identified from the VESKA statistics was compared to the total number of cases for each hospital. For the 9 hospitals the number of cases in the VESKA statistics was 407, whereas, after excluding diagnoses that were actually "status after fracture" and double entries, the total for these hospitals was 392, that is, about 4% fewer than the VESKA statistics indicate. It is concluded that the VESKA statistics provide a good approximation of the actual number of cases treated in these hospitals, with a tendency to overestimate this number. In order to use these statistics for calculating incidence figures, however, it is imperative that a greater proportion of all hospitals (currently 50% in the canton, 35% nationwide) participate in these statistics.
Abstract:
The Institute of Public Health in Ireland is an all-island body which aims to improve health in Ireland by working to combat health inequalities and influence public policies in favour of health. The Institute promotes North-South co-operation in research, training, information and policy. The Institute commends the Department of Health and Children for producing the Discussion Paper on the Proposed Health Information Bill (June 2008) and welcomes the opportunity to comment on it. The first objective of Health Information: A National Strategy (2004) is to support the implementation of Quality and Fairness: A Health System for You (2001). The National Health Goals - such as ‘Better health for everyone’, ‘Fair access’ and ‘Responsive and appropriate care delivery’ - are expressed in terms of the health of the public as well as patients. The Discussion Paper focuses on personal information, and the data flows within the health system, that are needed to enhance medical care and maximise patient safety. The Institute believes that the Health Information Bill should also aim to support more fully the achievement of the National Health Goals and the public health function. This requires the development of more integrated information systems that link the healthcare sector and other sectors. Assessment of health services performance - in terms of the public’s health, health inequalities and achievement of the National Health Goals - requires such information systems. They will enable the construction of public health key performance indicators for the healthcare services.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, where the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
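The classification/pattern-recognition task described above can be illustrated with the simplest possible ANN: a single logistic neuron trained by gradient descent to separate synthetic 2-D "spatial" points. This is only a minimal sketch on assumed toy data, not the authors' models (which apply multi-layer ANNs and SVMs to real geo-spatial features):

```python
import math
import random

def train_neuron(samples, labels, lr=0.1, epochs=200):
    """Train a single logistic neuron (the simplest ANN) by
    stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            yhat = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = yhat - t                       # gradient of the loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a point by the sign of the neuron's pre-activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Synthetic 2-D "spatial" data: label 1 when the point lies above the line y = x
random.seed(1)
pts = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
labels = [1 if y > x else 0 for x, y in pts]
w, b = train_neuron(pts, labels)
```

Real geo-feature spaces would add columns such as elevation or remote-sensing bands; the training loop is unchanged, only the input dimension grows.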
Abstract:
The geographic information system approach has permitted integration of demographic, socio-economic and environmental data, providing correlation between information from several data banks. In the current work, occurrences of human and canine visceral leishmaniasis and of insect vectors (Lutzomyia longipalpis), as well as biogeographic information related to the 9 areas that comprise the city of Belo Horizonte, Brazil, between April 2001 and March 2002, were correlated and georeferenced. Using this technique it was possible to define concentration loci of canine leishmaniasis in the following regions: East, Northeast, Northwest, West and Venda Nova. For human leishmaniasis, however, it was not possible to perform the same analysis. Data analysis also showed that 84.2% of the human leishmaniasis cases were related to canine leishmaniasis cases. Concerning the biogeographic analysis (altitude, area of vegetation influence, hydrography, and areas of poverty), only altitude was shown to influence the emergence of leishmaniasis cases. A total of 4673 canine leishmaniasis cases and 64 human leishmaniasis cases were georeferenced, of which 67.5% and 71.9%, respectively, were living between 780 and 880 m above sea level. At these same altitudes, a large number of phlebotomine sand flies were collected. We therefore suggest control measures for leishmaniasis in the city of Belo Horizonte that give priority to canine leishmaniasis foci and regions at altitudes between 780 and 880 m.
Abstract:
In recent years, analysis of the genomes of many organisms has received increasing international attention. The bulk of the effort to date has centred on the Human Genome Project and the analysis of model organisms such as yeast, Drosophila and Caenorhabditis elegans. More recently, the revolution in genome sequencing and gene identification has begun to impact on infectious disease organisms. Initially, much of the effort was concentrated on prokaryotes, but small eukaryotic genomes, including the protozoan parasites Plasmodium, Toxoplasma and the trypanosomatids (Leishmania, Trypanosoma brucei and T. cruzi), as well as some multicellular organisms, such as Brugia and Schistosoma, are benefiting from the technological advances of the genome era. These advances promise a radical new approach to the development of novel diagnostic tools, chemotherapeutic targets and vaccines for infectious disease organisms, as well as to the more detailed analysis of cell biology and function. Several networks or consortia linking laboratories around the world have been established to support these parasite genome projects[1] (for more information, see http://www.ebi.ac.uk/parasites/paratable.html). Five of these networks were supported by an initiative launched in 1994 by the Special Programme for Research and Training in Tropical Diseases (TDR) of the WHO[2, 3, 4, 5, 6]. The Leishmania Genome Network (LGN) is one of these[3]. Its activities are reported at http://www.ebi.ac.uk/parasites/leish.html, and its current aim is to map and sequence the genome of Leishmania by the year 2002. All the mapping, hybridization and sequence data are also publicly available from LeishDB, an AceDB-based genome database (http://www.ebi.ac.uk/parasites/LGN/leissssoft.html).
Abstract:
In this paper we look at how web-based social software can be used for qualitative data analysis of online peer-to-peer learning experiences. Specifically, we propose to use Cohere, a web-based social sense-making tool, to observe, track, annotate and visualize discussion group activities in online courses. We define a specific methodology for data observation and structuring, and present the results of an analysis of peer interactions conducted in a discussion forum in a real case study of a P2PU course. Finally, we discuss how network visualization and analysis can be used to gain a better understanding of the peer-to-peer learning experience. To do so, we provide preliminary insights on the social, dialogical and conceptual connections that have been generated within one online discussion group.
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
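The core move in the compositional approach (transform unit-sum densities into unconstrained real space, apply standard multivariate statistics, then back-transform so the unit sum constraint is honoured) can be sketched with Aitchison's centred log-ratio (clr) transform. This is an illustrative sketch of the transform pair only, with hypothetical values, not the paper's full forecasting model:

```python
import math

def clr(composition):
    """Centred log-ratio transform: map a strictly positive composition
    (components summing to 1) into real space; the result sums to zero."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

def clr_inverse(z):
    """Back-transform: exponentiate and re-close so the unit sum holds."""
    e = [math.exp(v) for v in z]
    total = sum(e)
    return [v / total for v in e]

# Toy "death density" over three age groups (values are hypothetical)
d = [0.2, 0.3, 0.5]
z = clr(d)                  # unconstrained coordinates for ordinary statistics
recovered = clr_inverse(z)  # returns to the unit-sum simplex
```

Forecasting then operates on the transformed coordinates (e.g. with a Lee-Carter-style model) before back-transforming. Note that log-ratio transforms require strictly positive components, so zero densities need special handling.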