851 results for statistical methods
Abstract:
Pedestrian streets are a recognised way to revitalise commerce in city-centre areas. At first many merchants are sceptical about the changes a pedestrian street brings, but experience shows that pedestrian streets have been successful and increase the sales of the businesses located on them. Some businesses, however, do not benefit from pedestrian streets, while others benefit greatly when a street is pedestrianised. This master's thesis examines the commercial structure of pedestrian streets in order to find out what types of businesses are found on them. The results are compared with the commercial structure of the central commercial zone in which the pedestrian street is located, revealing the differences in commercial structure. The thesis also examines how common chain stores are on pedestrian streets and in central commercial zones. The research data were collected through a commercial inventory carried out in three Finnish towns: Tammisaari, Kerava and Pori. The data were classified and the results were mapped. Basic statistical methods were used to analyse the results. The results were broken down by pedestrian street, shopping centres and other locations, and classified into the general categories of retail, restaurant and other services. The results show clear differences between pedestrian streets and central commercial zones. Pedestrian streets have considerably more retail shops, especially fashion shops, than other streets. Shopping centres have a commercial structure similar to that of pedestrian streets, whereas other streets have fewer retail shops and more service businesses. Restaurants are almost equally common throughout the central commercial zone. For chain stores the results are inconclusive: there are indications that they are more common on pedestrian streets, especially in larger cities, but the data are not sufficient to draw firm conclusions. Over the last 10–15 years, Finnish pedestrian streets have become more restaurant-dominated at the expense of other services, while the number of retail shops has remained stable. Finnish pedestrian streets differ in commercial structure from other Nordic pedestrian streets, which have more retail shops and fewer service businesses. The case-specific results vary considerably, and local factors are often stronger than general theories about the location of shops on pedestrian streets. Overall, the results support the theoretical framework and provide more detailed information on the commercial structure of pedestrian streets and central commercial zones and on the factors that influence it.
Abstract:
Current scientific research is characterized by increasing specialization, accumulating knowledge at a high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years, and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence: an environment characterized by plentiful and available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing but assimilating the information and transforming it to suit the needs of a specific application. The increasingly specialized areas of scientific research often have the common goal of converting data into insight, allowing the identification of solutions to scientific problems. Due to this common goal, there are strong parallels between different areas of application that can be exploited and used to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science applications, in visual processing and in biomedicine. Each sub-discipline has found its own specialized methodologies that make these statistical methods successful for the given application. The unification of specialized areas is possible because many different problems share strong analogies, making the theories developed for one problem applicable to other areas of research. It is the goal of this paper to demonstrate the utility of merging two disparate areas of application to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline by another. We will demonstrate this general concept with the specific example of merging language technologies and computational biology.
Abstract:
Empirical research available on technology transfer initiatives is either North American or European. The literature of the last two decades shows a range of research objectives, such as identifying the variables to be measured and the statistical methods to be used when studying university-based technology transfer initiatives. AUTM survey data from 1996 to 2008 provide insightful patterns about North American technology transfer initiatives; we use these data in our paper. The paper has three sections: a comparison of North American universities with (n=1129) and without medical schools (n=786), an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer variables into multiple stages, namely disclosures, patents and license agreements. Using the same approach, with minor variations, three stages are defined in this paper. The first stage takes R&D expenditure as input and invention disclosures as output. The second stage takes invention disclosures as input and patents issued as output. The third stage takes patents issued as input and technology transfers as outcomes.
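As a rough illustration of the DEA component of this record (not the authors' actual model, which spans three stages and 20 variables), the sketch below sets up a minimal input-oriented CCR formulation for the first stage, with R&D expenditure as the input and invention disclosures as the output; the four-university setup and all figures are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-input, single-output DEA sketch for stage 1
# (input: R&D expenditure, output: invention disclosures). Illustrative data only.
x = np.array([120.0, 80.0, 200.0, 150.0])   # R&D expenditure per university ($M, invented)
y = np.array([45.0, 40.0, 60.0, 30.0])      # invention disclosures per university (invented)

def ccr_efficiency(o, x, y):
    """Input-oriented CCR efficiency of decision-making unit o (1.0 = on the frontier)."""
    n = len(x)
    c = np.r_[1.0, np.zeros(n)]              # minimise theta; variables are [theta, lambda_1..n]
    A_ub = [np.r_[-x[o], x],                 # sum(lambda_j * x_j) <= theta * x_o
            np.r_[0.0, -y]]                  # sum(lambda_j * y_j) >= y_o
    b_ub = [0.0, -y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(len(x)):
    print(f"University {o}: DEA efficiency = {ccr_efficiency(o, x, y):.3f}")
```

The same template would be repeated for the second and third stages by swapping in the corresponding input and output variables.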
Abstract:
This study aimed to assess soil nutrient status and heavy metal content and their impact on the predominant soil bacterial communities of mangroves of the Mahanadi Delta. Mangrove soil of the Mahanadi Delta is slightly acidic, and the levels of soil nutrients such as carbon, nitrogen, phosphorus and potash vary with season and site. The seasonal average concentrations (µg/g) of various heavy metals were in the range: 14810-63370 (Fe), 2.8-32.6 (Cu), 13.4-55.7 (Ni), 1.8-7.9 (Cd), 16.6-54.7 (Pb), 24.4-132.5 (Zn) and 13.3-48.2 (Co). Among the different heavy metals analysed, Co, Cu and Cd were above their permissible limits, as prescribed by Indian Standards (Co = 17 µg/g, Cu = 30 µg/g, Cd = 3-6 µg/g), indicating pollution in the mangrove soil. A viable plate count revealed the presence of different groups of bacteria in the mangrove soil, i.e. heterotrophs, free-living N2 fixers, nitrifiers, denitrifiers, phosphate solubilisers, cellulose degraders and sulfur oxidisers. Principal component analysis performed using multivariate statistical methods showed a positive relationship between soil nutrients and microbial load, whereas metals such as Cu, Co and Ni showed a negative impact on some of the studied soil bacteria.
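For readers unfamiliar with the multivariate step, the following is a minimal, hypothetical sketch of a principal component analysis of the kind described, written with scikit-learn; the variable names echo the abstract, but the 24 site-season samples are simulated stand-ins, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative only: which soil variables load together with microbial counts?
rng = np.random.default_rng(1)
variables = ["C", "N", "P", "K", "Fe", "Cu", "Ni", "Cd", "Pb", "Zn", "Co", "microbial_load"]
X = rng.normal(size=(24, len(variables)))        # 24 hypothetical site-season samples

X_std = StandardScaler().fit_transform(X)        # standardise: nutrients and metals differ in scale
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
for name, loading in zip(variables, pca.components_[0]):
    print(f"PC1 loading for {name}: {loading:+.2f}")
```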
Abstract:
Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding the appropriate usage of these methods in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach, we evaluated, for both methods, the optimal number of species and the calling song characteristics that lead to the most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. Accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximal for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals. They also show that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
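A hedged sketch of the two methods compared in this study is given below, using scikit-learn for the discriminant function analysis and SciPy for hierarchical cluster analysis; the six-species, four-feature song data are simulated placeholders for real calling song measurements (e.g. carrier frequency, chirp rate, pulse duration).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_species, n_calls, n_features = 6, 30, 4
centers = rng.normal(scale=3.0, size=(n_species, n_features))
X = np.vstack([c + rng.normal(scale=0.5, size=(n_calls, n_features)) for c in centers])
y = np.repeat(np.arange(n_species), n_calls)

# Discriminant function analysis: supervised, needs a priori species labels.
lda = LinearDiscriminantAnalysis()
print("DFA cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())

# Cluster analysis: unsupervised, no labels needed.
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=n_species, criterion="maxclust")
print("Cluster sizes:", np.bincount(clusters)[1:])
```

The supervised/unsupervised contrast in the code mirrors the constraint noted above: DFA needs the species labels up front, while the cluster analysis does not.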
Abstract:
In China, the recent outbreak of the novel influenza A/H7N9 virus has been assumed to be severe, and it may become even more severe in the near future. In order to develop highly protective vaccines and drugs for the A/H7N9 virus, it is critical to determine the selection pressure at each amino acid site. In the present study, six different statistical methods, consisting of four independent codon-based maximum likelihood (CML) methods, one hierarchical Bayesian (HB) method and one branch-site (BS) method, were employed to determine whether each amino acid site of the A/H7N9 virus is under natural selection pressure. Functions for both positively and negatively selected sites were inferred by annotating these sites with experimentally verified amino acid sites. Overall, the single amino acid site 627 of the PB2 protein was inferred as positively selected, and its function was identified as a T-cell epitope (TCE). Among the 26 negatively selected amino acid sites of the PB2, PB1, PA, HA, NP, NA, M1 and NS2 proteins, only 16 amino acid sites were identified to be involved in TCEs. In addition, 7 amino acid sites, including 608 and 609 of PA, 480 of NP, and 24, 25, 109 and 205 of M1, were identified to be involved in both B-cell epitopes (BCEs) and TCEs. In contrast, the function of position 62 of PA and positions 43 and 113 of HA was unknown. In conclusion, the seven amino acid sites involved in both BCEs and TCEs were identified as highly suitable targets, as these sites are predicted to play a principal role in inducing strong humoral and cellular immune responses against the A/H7N9 virus.
Abstract:
Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD in Mg ha⁻¹) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with higher topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
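The subplot-variability calculation behind the quoted percentages can be sketched as follows; the biomass surface here is a spatially uncorrelated simulation (so the CVs fall off faster with subplot size than in real, spatially autocorrelated forests), intended only to show the mechanics, not to reproduce the study's numbers.

```python
import numpy as np

rng = np.random.default_rng(3)
# One simulated 25 ha plot of 1 m2 cells with mean AGBD around 250 Mg/ha (invented).
plot = rng.gamma(shape=2.0, scale=125.0, size=(500, 500))

def subplot_cv(plot, subplot_side_m):
    """CV (%) of mean AGBD across non-overlapping square subplots of a given side length."""
    n = plot.shape[0] // subplot_side_m
    trimmed = plot[:n * subplot_side_m, :n * subplot_side_m]
    blocks = trimmed.reshape(n, subplot_side_m, n, subplot_side_m).mean(axis=(1, 3))
    return 100.0 * blocks.std(ddof=1) / blocks.mean()

for side in (10, 20, 50, 100):          # 0.01, 0.04, 0.25 and 1.0 ha subplots
    print(f"{side} m subplots: CV = {subplot_cv(plot, side):.1f}%")
```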
Abstract:
Social scientists have used agent-based models (ABMs) to explore the interaction and feedbacks among social agents and their environments. The bottom-up structure of ABMs enables simulation and investigation of complex systems and their emergent behaviour with a high level of detail; however, the stochastic nature and the potential combinations of parameters of such models create large, non-linear, multidimensional “big data,” which are difficult to analyze using traditional statistical methods. Our proposed project seeks to address this challenge by developing algorithms and web-based analysis and visualization tools that provide automated means of discovering complex relationships among variables. The tools will enable modellers to easily manage, analyze, visualize, and compare their output data, and will provide stakeholders, policy makers and the general public with intuitive web interfaces to explore, interact with and provide feedback on otherwise difficult-to-understand models.
Abstract:
High-resolution orbital and in situ observations acquired of the Martian surface during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, the application of stratigraphic and sedimentologic statistical methods, and the use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Imaging Science Experiment (HiRISE) camera on board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory, for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.
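As one concrete, assumed example of the kind of quantitative criterion used for bed thickness distributions, the sketch below fits exponential and log-normal models to simulated thickness data and compares them with a Kolmogorov-Smirnov test; the dissertation's actual statistical choices may differ, and all thickness values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
thickness_m = rng.lognormal(mean=-1.0, sigma=0.6, size=150)   # hypothetical bed thicknesses (m)

# Compare two candidate models for the thickness distribution; which fits better
# is one line of evidence commonly used when interpreting depositional history.
for name, dist, fit_kwargs in [("exponential", stats.expon, {"floc": 0}),
                               ("log-normal", stats.lognorm, {"floc": 0})]:
    params = dist.fit(thickness_m, **fit_kwargs)
    ks = stats.kstest(thickness_m, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```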
Abstract:
The purpose of this thesis was to detect and characterise areas at high risk for visceral leishmaniasis (VL) and to describe the patterns of occurrence and spread of the disease, between the years 1993-1996 and 2001-2006, in Teresina, Piauí, using statistical methods for spatial data analysis, geographic information systems and remote sensing imagery. The results of this study are presented as three manuscripts. The first used spatial data analysis to identify the areas at highest risk of VL in the urban area of Teresina between 2001 and 2006. The kernel-ratio results showed that the peripheral regions of the city were the most heavily affected throughout the period analysed. Analysis with local indicators of spatial autocorrelation showed that, at the beginning of the study period, clusters of high VL incidence were located mainly in the southern and north-eastern regions of the city, but in the following years they also appeared in the northern region, suggesting that the pattern of VL occurrence is not static and that the disease can occasionally spread to other areas of the municipality. The second study aimed to characterise and predict high-risk territories for VL occurrence in Teresina based on socioeconomic indicators and environmental data obtained by remote sensing. The results of the object-oriented classification indicate the expansion of the urban area towards the periphery of the city, where vegetation cover was previously greater. The model developed was able to discriminate 15 groups of census tracts (CTs) with different probabilities of containing CTs at high risk of VL occurrence. The subset with the highest probability of containing high-risk CTs (92%) comprised CTs with a percentage of literate heads of household below the median (≤64.2%), a larger area covered by dense vegetation, and a percentage of households with up to 3 residents above the third quartile (>31.6%). In the training and validation samples, respectively, the model showed sensitivity of 79% and 54%, specificity of 74% and 71%, overall accuracy of 75% and 67%, and area under the ROC curve of 83% and 66%. The third manuscript aimed to evaluate the applicability of the object-oriented classification strategy in the search for possible land-cover indicators related to VL occurrence in an urban setting. Accuracy indices were high for both images (>90%). When VL incidence was correlated with the environmental indicators, positive correlations were found with the Dense vegetation, Low vegetation and Bare soil indicators, and negative correlations with the Water, Dense urban and Green urban indicators, all statistically significant. The results of this thesis reveal that the occurrence of VL on the periphery of Teresina is strongly related to inadequate socioeconomic conditions and to environmental changes resulting from urban expansion, which favour the occurrence of the vector (Lutzomyia longipalpis) in these regions.
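For reference, the validation metrics quoted above (sensitivity, specificity, overall accuracy and ROC AUC) are typically computed as in the hedged sketch below; the logistic model, its three predictors and all data are simulated placeholders, not the thesis's census-tract model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 3))                  # e.g. literacy, dense vegetation, household size (invented)
y = (X @ np.array([-1.0, 0.8, 0.6]) + rng.normal(size=300) > 0).astype(int)

model = LogisticRegression().fit(X[:200], y[:200])          # training sample
prob = model.predict_proba(X[200:])[:, 1]                   # validation sample
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y[200:], pred).ravel()
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
print("Overall accuracy:", (tp + tn) / (tp + tn + fp + fn))
print("ROC AUC:", roc_auc_score(y[200:], prob))
```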
Abstract:
The aim of this work is to analyse household demand for electrical energy in the Basque Autonomous Community, to draw specific conclusions from that analysis, and to present them. To carry out the analysis, we use statistical methods applied to economics, specifically econometric methods.
Abstract:
Molecular markers have been demonstrated to be useful for the estimation of stock mixture proportions where the origin of individuals is determined from baseline samples. Bayesian statistical methods are widely recognized as providing a preferable strategy for such analyses. In general, Bayesian estimation is based on standard latent class models using data augmentation through Markov chain Monte Carlo techniques. In this study, we introduce a novel approach based on recent developments in the estimation of genetic population structure. Our strategy combines analytical integration with stochastic optimization to identify stock mixtures. An important enhancement over previous methods is the possibility of appropriately handling data where only partial baseline sample information is available. We address the potential use of nonmolecular, auxiliary biological information in our Bayesian model.
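To make the mixture estimation concrete, here is a plain EM sketch for stock mixture proportions given fixed baseline genotype likelihoods; note this is a deliberately simplified frequentist stand-in, not the Bayesian analytical-integration-with-stochastic-optimization scheme introduced in the paper, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Likelihood matrix L[i, k] = P(genotype of fish i | fish comes from stock k),
# here filled with made-up values for 200 fish and 3 candidate stocks. In a real
# analysis these come from baseline allele frequencies.
L = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=200)

pi = np.full(3, 1.0 / 3.0)          # initial mixture proportions
for _ in range(200):                # EM iterations
    post = pi * L                   # unnormalised stock-membership probabilities
    post /= post.sum(axis=1, keepdims=True)
    pi = post.mean(axis=0)          # M-step: average posterior memberships

print("Estimated stock proportions:", np.round(pi, 3))
```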
Abstract:
The purpose of this thesis is to analyse under which circumstances Brazilian presidents resort to mechanisms of political control over the public bureaucracy. The central argument is that presidential recourse to political appointments, detailed regulatory decrees and the creation of public agencies centralised in the Presidency should vary as a function of political factors and characteristics of the governing coalitions. Through political appointments, presidents can monitor the behaviour of civil servants subject to the unwanted influence of cabinet ministers. With detailed regulatory decrees, they can reduce the decision-making autonomy of civil servants in interpreting vague laws. Finally, by creating public agencies centralised in the Presidency, they can generate more favourable conditions for future control of the public bureaucracy. The purpose of the thesis is developed through three research problems, with variable-oriented designs. The first, developed in the first chapter, addresses how the political heterogeneity of the coalition affects presidential control over the public bureaucracy through political appointments. The second problem, discussed in the following chapter, analyses how ministerial turnover and the demand for inter-ministerial implementation of the same law affect the degree of detail of regulatory decrees. Finally, the third research problem, addressed in the last chapter, assesses how the heterogeneous composition of cabinets affects the creation of bureaucracies centralised in the Presidency of the Republic. Using statistical methods, multivariate linear regression models were estimated to analyse the determinants of (1) political appointments and (2) the degree of detail of regulatory decrees, and binary logistic regression models were estimated to assess the probability of presidential centralisation in the creation of public agencies. Politicisation of the federal bureaucracy tends to increase when conflict among coalition partners is greater, a presidential alternative to unwanted ministerial direction of the public bureaucracy. Regulatory decrees tend to be more detailed when ministries are more volatile and when implementation is inter-ministerial, a presidential alternative to the autonomy of the public bureaucracy. Finally, centralisation tends to grow when policy conflict between the president and ministers is greater, a way around ministerial directions that are harmful to the president's preferences.
Abstract:
In this paper we present livestock breeding developments that could be taken into consideration in the genetic improvement of farmed aquaculture species, especially in freshwater fish. Firstly, the current breeding objective in aquatic species has focused almost exclusively on the improvement of body weight at harvest or on growth related traits. This is unlikely to be sufficient to meet the future needs of the aquaculture industry. To meet future demands breeding programs will most likely have to include additional traits, such as fitness related ones (survival, disease resistance), feed efficiency, or flesh quality, rather than only growth performance. In order to select for a multi-trait breeding objective, genetic variation in traits of interest and the genetic relationships among them need to be estimated. In addition, economic values for these traits will be required. Generally, there is a paucity of data on variable and fixed production costs in aquaculture, and this could be a major constraint in the further expansion of the breeding objectives. Secondly, genetic evaluation systems using the restricted maximum likelihood method (REML) and best linear unbiased prediction (BLUP) in a framework of mixed model methodology could be widely adopted to replace the more commonly used method of mass selection based on phenotypic performance. The BLUP method increases the accuracy of selection and also allows the management of inbreeding and estimation of genetic trends. BLUP is an improvement over the classic selection index approach, which was used in the success story of the genetically improved farmed tilapia (GIFT) in the Philippines, with genetic gains from 10 to 20 per cent per generation of selection. In parallel with BLUP, optimal genetic contribution theory can be applied to maximize genetic gain while constraining inbreeding in the long run in selection programs. Thirdly, by using advanced statistical methods, genetic selection can be carried out not only at the nucleus level but also in lower tiers of the pyramid breeding structure. Large scale across population genetic evaluation through genetic connectedness using cryopreserved sperm enables the comparison and ranking of genetic merit of all animals across populations, countries or years, and thus the genetically superior brood stock can be identified and widely used and exchanged to increase the rate of genetic progress in the population as a whole. It is concluded that sound genetic programs need to be established for aquaculture species. In addition to being very effective, fully pedigreed breeding programs would also enable the exploration of possibilities of integrating molecular markers (e.g., genetic tagging using DNA fingerprinting, marker (gene) assisted selection) and reproductive technologies such as in-vitro fertilization using cryopreserved spermatozoa.
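A minimal numerical sketch of the BLUP machinery mentioned above, via Henderson's mixed model equations for a toy animal model, is shown below; the phenotypes and variance components are invented, and the relationship matrix is taken as the identity (i.e. unrelated animals) purely to keep the example small.

```python
import numpy as np

# Animal model y = Xb + Zu + e, solved with Henderson's mixed model equations:
# [X'X        X'Z           ] [b]   [X'y]
# [Z'X        Z'Z + A^-1*lam] [u] = [Z'y]
y = np.array([4.5, 3.8, 5.1, 4.9, 4.2])          # phenotypes (e.g. harvest weight, kg; invented)
X = np.ones((5, 1))                              # fixed effect: overall mean only
Z = np.eye(5)                                    # each record belongs to one animal
A_inv = np.eye(5)                                # inverse relationship matrix (simplified)

sigma2_u, sigma2_e = 0.2, 0.6                    # assumed variance components (e.g. from REML)
lam = sigma2_e / sigma2_u

lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + A_inv * lam]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

b_hat, u_hat = sol[:1], sol[1:]
print("Estimated mean:", b_hat)
print("Predicted breeding values (EBVs):", np.round(u_hat, 3))
```

In practice, a pedigree-derived numerator relationship matrix replaces the identity, which is what lets BLUP use information from relatives and manage inbreeding as described above.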
Abstract:
Details are given of a standard format used by the Pond Dynamics/Aquaculture Collaborative Research Support Program of the US Agency for International Development for the communication of experimental ideas. An example is given of the "Preliminary Proposal Format," which contains a list of information categories or headings as follows: Title; Objectives; Significance; Experimental design; Pond facilities; Stocking rate; Other inputs; Sampling plan; Hypotheses; Statistical methods; Duration; Water management; and Schedule.