929 results for ZERO-OR-ONE INFLATED BETA DISTRIBUTION
Abstract:
The effects of continuous tillage on the distribution of soil organic matter (SOM) and aggregates have been well studied for arable soils. However, less is known about the effects of sporadic tillage on SOM and aggregate dynamics in grassland soils. The objectives of the present thesis were (I) to study the longer-term effects of sporadic tillage of grassland on organic carbon (Corg) stocks and the distribution of aggregates and SOM, (II) to investigate the combined effects of sporadic tillage and fertilization on carbon and nitrogen dynamics in grassland soils, and (III) to study the temporal dynamics of Corg stocks, aggregate distribution and microbial biomass in grassland soils. Soil samples were taken at three soil depths (0 – 10 cm; 10 – 25 cm; 25 – 40 cm) from a field trial with loamy sandy soils (Cambisols, Eutric Luvisols, Stagnosols, Anthrosols) north of Kiel, Germany. For Objective I, we sampled soil two and five years after one or two tillage operations. Treatments consisted of (i) permanent grassland, (ii) tillage of grassland followed by re-establishment of grassland and (iii) tillage of grassland followed by re-establishment of grassland with one season of winter wheat in between. Tillage of the grassland led to a reduction in Corg stocks, large macroaggregates (>2000 µm) and SOM in the top 10 cm of soil. These effects were still significant two years after tillage but were no longer present five years after tillage. Over the whole soil profile (0 – 40 cm), there were no significant differences in these parameters between the tilled plots and the permanent grassland. A second tillage event and the insertion of one season of winter wheat did not lead to any further effects on Corg stocks or on aggregate and SOM concentrations in comparison with a single tillage event in these grassland soils. Treatments adopted for Objective II included (i) long-term grassland and (ii) tillage of grassland followed by re-establishment of grassland with one season of winter wheat in between. The plots were split and received either 240 kg N ha-1 year-1 in the form of cattle slurry or no cattle slurry. The application of slurry over a period of four years had no effect on the Corg and total nitrogen stocks or on the aggregate distribution, but led to a reduction of free, not physically protected SOM. However, the application of cattle slurry and the grassland renovation appeared to change the plant species composition, so generalizations on the direct effects are not yet possible. For Objective III, a further field trial was initiated in September 2010. Soil samples were taken six times within one year (from October 2010 to October 2011) (i) after the conversion of arable land into grassland, (ii) after tillage of grassland followed by re-establishment of grassland and (iii) in a permanent grassland. We found an increase in microbial and fungal biomass after the conversion of arable land into grassland, but no effect on aggregate distribution or Corg stocks. A one-time tillage operation in grassland led to a reduction in large macroaggregates and Corg stocks in the top 10 cm of soil, with no effect over the sampled soil profile. However, we found large variations in fungal biomass and aggregate distribution within one year in the permanent grassland, presumably caused by environmental factors.
Overall, our results suggest that a single tillage operation in grassland soils markedly decreases the concentrations of Corg, larger aggregates and SOM, but that it does not have long-lasting effects on these parameters. The application of slurry cannot compensate for the negative effects of a tillage event on aggregate concentrations or Corg stocks. Furthermore, while the Corg concentration is not subject to fluctuations within a year, there are large variations in the aggregate distribution even in a permanent grassland soil. Conclusions drawn from a single sampling time should therefore be treated with care.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
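The two-stage construction can be illustrated with a small simulation: an incidence stage in which each part is independently present or absent (the binomial part of the model), and a conditional stage in which the non-zero parts receive a logistic-normal-style composition. This is only a hedged sketch; the sizes, presence probabilities and the simplified log-normal parameterisation below are illustrative assumptions, not the models of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 4                                   # samples x parts (illustrative sizes)

# Stage 1: incidence -- which parts are non-zero (independent binomial model)
p_present = np.array([1.0, 0.7, 0.9, 0.5])      # assumed presence probabilities per part
incidence = rng.random((n, K)) < p_present      # True where a part is non-zero

# Stage 2: conditional composition of the non-zero parts
# (logistic-normal style: log-normal weights closed to sum to one; a simplification)
z = rng.multivariate_normal(np.zeros(K), 0.3 * np.eye(K), size=n)
w = np.exp(z) * incidence                       # zero out the absent parts
composition = w / w.sum(axis=1, keepdims=True)  # rows sum to 1 and contain essential zeros

print(composition[:5].round(3))
```

The output rows form an incidence matrix (pattern of zeros) paired with a conditional compositional matrix, the two ingredients the abstract describes.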
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at two hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually arise in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
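One part of the programme sketched above, modelling how the occurrence of a zero (e.g. being teetotal) depends on household-level variables, can be illustrated with a simple logistic regression of the zero indicator on covariates. The data, covariate names and coefficients below are hypothetical placeholders, not the survey variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1224                                   # sample size taken from the abstract

# Hypothetical household covariates: log income, household size, urban indicator
X = sm.add_constant(np.column_stack([
    rng.normal(3.0, 1.0, n),
    rng.integers(1, 7, n),
    rng.integers(0, 2, n),
]))
teetotal = (rng.random(n) < 0.25).astype(float)   # placeholder zero/non-zero indicator

# Logistic model for the probability of a structural zero given household covariates
model = sm.Logit(teetotal, X).fit(disp=0)
print(model.summary())
```

A second, conditional model for the sub-composition of spending excluding alcohol/tobacco would then be fitted separately, in the spirit of the two-stage approach of the previous abstract.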
Abstract:
A novel test of spatial independence of the distribution of crystals or phases in rocks based on compositional statistics is introduced. It improves and generalizes the common joins-count statistics known from map analysis in geographic information systems. Assigning phases independently to objects in R^D is modelled by a single-trial multinomial random function Z(x), where the probabilities of the phases add to one and are explicitly modelled as compositions in the K-part simplex S^K. Thus, apparent inconsistencies of the tests based on the conventional joins-count statistics and their possibly contradictory interpretations are avoided. In practical applications we assume that the probabilities of the phases do not depend on the location but are identical everywhere in the domain of definition. The model then involves the sum of r independent, identically multinomially distributed single-trial random variables, which is an r-trial multinomially distributed random variable. The probabilities of the distribution of the r counts can be considered as a composition in the Q-part simplex S^Q. They span the so-called Hardy-Weinberg manifold H, which is proved to be a (K-1)-affine subspace of S^Q. This is a generalisation of the well-known Hardy-Weinberg law of genetics. If the assignment of phases accounts for some kind of spatial dependence, then the r-trial probabilities do not remain on H. This suggests using the Aitchison distance from the observed probabilities to H to test for dependence. Moreover, when there is a spatial fluctuation of the multinomial probabilities, the observed r-trial probabilities move on H. This shift can be used to check for these fluctuations. A practical procedure and an algorithm to perform the test have been developed. Some cases applied to simulated and real data are presented. Key words: spatial distribution of crystals in rocks, spatial distribution of phases, joins-count statistics, multinomial distribution, Hardy-Weinberg law, Hardy-Weinberg manifold, Aitchison geometry.
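The core quantity of the test, the Aitchison distance from the observed composition of r-trial counts to the Hardy-Weinberg manifold H, can be sketched numerically for the two-phase case (K = 2), where H consists of the binomial(r, p) probability vectors. The observed composition and the brute-force one-dimensional minimisation below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def clr(x):
    """Centred log-ratio transform of a strictly positive composition."""
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_dist(x, y):
    return np.linalg.norm(clr(x) - clr(y))

r = 4                                                   # trials per object
observed = np.array([0.10, 0.20, 0.35, 0.25, 0.10])     # illustrative composition of counts 0..r

def dist_to_HW(p):
    hw = binom.pmf(np.arange(r + 1), r, p)              # a point on the Hardy-Weinberg manifold
    return aitchison_dist(observed, hw)

res = minimize_scalar(dist_to_HW, bounds=(1e-6, 1 - 1e-6), method="bounded")
print("closest p on H:", round(res.x, 4), " Aitchison distance:", round(res.fun, 4))
```

A small distance is consistent with independent assignment under a constant p, while a large distance flags spatial dependence, which is the logic of the proposed test.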
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables from previous single-phase faults that occurred in the system, in which an electric arc occurred at the fault location. For this reason, the signal is first pre-treated to eliminate its voltage peaks, so that the analysis can be carried out on a signal as close to a sine wave as possible.
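The sequence quantities mentioned above come from the standard Fortescue (symmetrical components) transform of the measured phase voltages and currents. The sketch below only shows that decomposition with made-up phasors for a phase-A fault; it is not the paper's impedance-estimation procedure.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                     # 120-degree rotation operator
A = np.array([[1, 1,    1   ],
              [1, a**2, a   ],
              [1, a,    a**2]])                # sequence-to-phase transformation matrix

# Hypothetical phase voltages measured at the sending end during a phase-A fault (per unit)
V_abc = np.array([0.2 + 0.0j, -0.5 - 0.87j, -0.5 + 0.87j])

V_012 = np.linalg.inv(A) @ V_abc               # zero-, positive- and negative-sequence voltages
print("V0, V1, V2 =", np.round(V_012, 3))
```

The same transform applied to the recorded fault currents yields the zero-sequence quantities from which a zero-sequence impedance estimate could be derived.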
Abstract:
Background: Infection with multiple types of human papillomavirus (HPV) is one of the main risk factors associated with the development of cervical lesions. In this study, cervical samples collected from 1,810 women with diverse sociocultural backgrounds, who attended to their cervical screening program in different geographical regions of Colombia, were examined for the presence of cervical lesions and HPV by Papanicolau testing and DNA PCR detection, respectively. Principal Findings: The negative binomial distribution model used in this study showed differences between the observed and expected values within some risk factor categories analyzed. Particularly in the case of single infection and coinfection with more than 4 HPV types, observed frequencies were smaller than expected, while the number of women infected with 2 to 4 viral types were higher than expected. Data analysis according to a negative binomial regression showed an increase in the risk of acquiring more HPV types in women who were of indigenous ethnicity (+37.8%), while this risk decreased in women who had given birth more than 4 times (-31.1%), or were of mestizo (-24.6%) or black (-40.9%) ethnicity. Conclusions: According to a theoretical probability distribution, the observed number of women having either a single infection or more than 4 viral types was smaller than expected, while for those infected with 2-4 HPV types it was larger than expected. Taking into account that this study showed a higher HPV coinfection rate in the indigenous ethnicity, the role of underlying factors should be assessed in detail in future studies.
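A hedged sketch of the kind of negative binomial regression described above, with the count of coinfecting HPV types regressed on ethnicity and parity. The data frame, variable names and values are hypothetical placeholders, not the study data; exponentiated coefficients are read as rate ratios, which is how the percentage changes in the abstract would be obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "n_types": rng.poisson(1.2, 300),                               # placeholder counts of HPV types
    "ethnicity": rng.choice(["mestizo", "indigenous", "black"], 300),
    "parity_gt4": rng.integers(0, 2, 300),                          # more than 4 births (dummy)
})

# Negative binomial regression of the number of coinfecting types on covariates
model = smf.negativebinomial("n_types ~ C(ethnicity) + parity_gt4", data=df).fit(disp=0)
print(model.summary())
print(np.exp(model.params))   # exponentiated coefficients ~ rate ratios (last row is the dispersion)
```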
Abstract:
Human butyrylcholinesterase (BChE; EC 3.1.1.8) is a polymorphic enzyme synthesized in the liver and in adipose tissue, widely distributed throughout the body and responsible for hydrolyzing some choline esters such as procaine, aliphatic esters such as acetylsalicylic acid, drugs such as methylprednisolone, mivacurium and succinylcholine, and drugs of use and/or abuse such as heroin and cocaine. It is encoded by the BCHE gene (OMIM 177400), for which more than 100 variants have been identified, some not fully studied, in addition to the most frequent form, called the usual or wild-type form. Different polymorphisms of the BCHE gene have been associated with the synthesis of enzymes with varying levels of catalytic activity. The molecular bases of some of these genetic variants have been reported, among them the Atypical (A), fluoride-resistant types 1 and 2 (F-1 and F-2), silent (S), Kalow (K), James (J) and Hammersmith (H) variants. In this study, the validated instrument Lifetime Severity Index for Cocaine Use Disorder (LSI-C) was applied to a group of patients to assess the severity of cocaine use over their lifetime. In addition, Single Nucleotide Polymorphisms (SNPs) in the BCHE gene known to be responsible for adverse reactions in cocaine-using patients were determined by sequencing the gene, and the effect of the SNPs on protein function and structure was predicted using bioinformatic tools. The LSI-C instrument yielded results in four dimensions: lifetime use, recent use, psychological dependence, and attempts to quit. Molecular analysis revealed two non-synonymous coding SNPs (cSNPs) in 27.3% of the sample, c.293A>G (p.Asp98Gly) and c.1699G>A (p.Ala567Thr), located in exons 2 and 4, which correspond, from a functional point of view, to the Atypical (A) variant [dbSNP: rs1799807] and the Kalow (K) variant [dbSNP: rs1803274] of the BChE enzyme, respectively. In silico prediction studies established a pathogenic character for the SNP p.Asp98Gly, whereas the SNP p.Ala567Thr showed neutral behavior. Analysis of the results suggests a relationship between polymorphisms or genetic variants responsible for low catalytic activity and/or low plasma concentration of the BChE enzyme and some of the adverse reactions occurring in cocaine-using patients.
Abstract:
Malignant gliomas represent one of the most aggressive forms of central nervous system (CNS) tumors. According to the World Health Organization (WHO) classification of brain tumors, astrocytomas are categorized into four grades, determined by the underlying pathology. Malignant (or high-grade) gliomas thus include anaplastic glioma (grade III) and glioblastoma multiforme (GBM, grade IV), the latter being the most aggressive with the worst prognosis (1). The therapeutic management of CNS tumors is based on surgery, radiotherapy and chemotherapy, depending on tumor characteristics, clinical stage and age (2),(3); however, none of the standard treatments is completely safe and compatible with an acceptable quality of life (3),(4). In general, chemotherapy is the first option for disseminated tumors, such as invasive glioblastoma and high-risk or multiply metastatic medulloblastoma, but the prognosis in these patients is very poor (2),(3). Only new targeted therapies (2), such as anti-angiogenic therapies (4) or gene therapies, show a real benefit in limited groups of patients with known, specific molecular defects (4). The development of new pharmacological therapies to attack brain tumors is therefore necessary. Malignant gliomas are frequently chemoresistant, and this resistance appears to depend on at least two mechanisms. First, many anticancer drugs penetrate poorly across the blood-brain barrier (BBB), the blood-cerebrospinal fluid barrier (BCSFB) and the blood-tumor barrier (BTB); this resistance is due to the interaction of the drug with several ABC (ATP-binding cassette) drug transporters or efflux pumps that are overexpressed in the endothelial or epithelial cells of these barriers. Second, these ABC drug efflux transporters, present in the tumor cells themselves, confer a phenotype known as multidrug resistance (MDR), which is characteristic of several solid tumors. This phenotype is also present in CNS tumors, and its role in gliomas is under investigation (5). Drug delivery across the BBB is therefore one of the vital problems in targeted therapy. Recent studies have shown that some small molecules used in these therapies are substrates of P-glycoprotein (Pgp), as well as of other efflux pumps such as the multidrug resistance-related proteins (MRPs) or the breast cancer resistance protein (BCRP), which prevent drugs of this type from reaching the tumor (1). One substrate of Pgp and BCRP is DOXOrubicin (DOXO), a drug used in anticancer therapy that is very effective against brain tumor cells in vitro, but whose clinical use is limited by poor delivery across the blood-brain barrier (BBB) and by the intrinsic resistance of the tumors.
Moreover, BBB cells and brain tumor cells also carry surface proteins, such as the low-density lipoprotein receptor (LDLR), that could be used as therapeutic targets in the BBB and in brain tumors. The importance of this study thus lies in generating therapeutic strategies that promote the passage of drugs across the blood-brain and tumor barriers and, at the same time, in identifying cellular mechanisms that induce increased expression of ABC transporters, so that these can be exploited as therapeutic targets. This study showed that a new "Trojan horse" strategy, in which DOXOrubicin is enclosed within a liposome, shields the drug so that it is not recognized by the ABC transporters of either the BBB or the tumor cells. The liposome was constructed to exploit the cells' LDLR receptor, ensuring entry across the BBB and into the tumor cells through endocytosis. This mechanism was combined with the use of statins (anticholesterol drugs), which increased LDLR expression and decreased the activity of the ABC transporters through their nitration, increasing the efficiency of our Trojan horse. We therefore demonstrated that the use of a new formulation, called ApolipoDOXO, together with statins improves drug delivery across the BBB, overcoming tumor resistance and reducing the dose-dependent side effects of DOXOrubicin. Moreover, this "Trojan horse" strategy is a new therapeutic approach that can be considered for increasing the efficacy of different drugs in several brain tumors, and it guarantees high efficiency even in a hypoxic environment, characteristic of cancer cells, in which Pgp transporter expression was increased. Taking into account the relationship between certain signaling pathways recognized as modulators of Pgp activity, this study presents not only the Trojan horse strategy but also another therapeutic proposal based on the use of temozolomide plus DOXOrubicin. This strategy showed that temozolomide manages to penetrate the BBB because it acts on the Wnt/GSK3/β-catenin signaling pathway, which modulates Pgp transporter expression. TMZ was shown to decrease Wnt3 protein and mRNA, supporting the hypothesis that, by decreasing transcription of the Wnt3 gene in BBB cells, the drug increases activation of the pathway, phosphorylating β-catenin and leading to a decrease in nuclear β-catenin and hence in its binding to the mdr1 gene promoter. Based on these results, this study identified three basic mechanisms related to ABC transporter expression and associated with the strategies employed: the first was the use of statins, which led to nitration of the transporters and decreased their activity via the NFκB transcription factor pathway; the second arose from the use of temozolomide, which methylates the Wnt3 gene, reducing the activity of the β-catenin signaling pathway and decreasing Pgp transporter expression. The third consisted of establishing the role of the RhoA/RhoA kinase axis as a modulator of the (non-canonical) GSK3/β-catenin pathway.
The RhoA kinase protein was shown to promote activation of the PTB1 protein, which, by phosphorylating GSK3, induced phosphorylation of β-catenin, leading to its destruction by the proteasome and preventing its binding to the mdr1 gene promoter, thereby reducing mdr1 expression. In conclusion, the strategies proposed in this work increased the cytotoxicity towards tumor cells by increasing the permeability not only of the blood-brain barrier but also of the tumor barrier itself. Likewise, the "Trojan horse" strategy could be useful for the therapy of other diseases of the central nervous system. Furthermore, these studies indicate that identifying mechanisms associated with ABC transporter expression could be a key tool in the development of new anticancer therapies.
Abstract:
This article analyzes the determinants of the presence of unwanted children in Colombia. Data from the National Demographic and Health Survey (ENDS, 2005) are used, specifically for women aged 40 or older. Given the special characteristics of the variable under analysis, count models are used to test whether socioeconomic characteristics such as education or economic stratum explain the presence of unwanted children. We find that the woman's education and the area of residence are significant determinants of unplanned births. Moreover, the negative relationship between the number of unwanted children and the woman's education has key implications for social policy.
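The count models mentioned above can be sketched, for example, with a zero-inflated Poisson specification (many women report zero unwanted births), in keeping with the zero-inflation theme of this search. The variables and data below are hypothetical, and the paper's exact specification may well differ.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 2000
educ_years = rng.integers(0, 16, n)                 # hypothetical years of schooling
urban = rng.integers(0, 2, n)                       # hypothetical urban-residence dummy
y = rng.poisson(np.exp(0.5 - 0.05 * educ_years))    # placeholder counts of unwanted children
y[rng.random(n) < 0.3] = 0                          # extra structural zeros

X = sm.add_constant(np.column_stack([educ_years, urban]))
zip_model = ZeroInflatedPoisson(y, X, exog_infl=sm.add_constant(educ_years)).fit(disp=0)
print(zip_model.summary())
```

The count part captures how the expected number of unwanted births varies with education and residence, while the inflation part models the excess zeros separately.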
Abstract:
The concepts of summative assessment and formative assessment are defined with the aim of examining whether they are really so different and mutually exclusive. Ways of combining the two and of including students in the assessment process are proposed.
Abstract:
This Doctoral Thesis, entitled "Computational development of quantum molecular similarity", deals fundamentally with the computational aspects of similarity measures based on the comparison of electron density functions. The first chapter, "Quantum similarity", is introductory. It describes electron probability density functions and their meaning within the framework of quantum mechanics. The essential aspects and the mathematical conditions to be satisfied are made explicit, with a view to a better understanding of the electron density models proposed. Electron densities are presented, mentioning the Hohenberg-Kohn theorems and outlining Bader's theory, as fundamental quantities in the description of molecules and in the understanding of their properties. The chapter "Models of molecular electron densities" presents original computational procedures for fitting density functions to models expanded in terms of 1s Gaussians centred on the nuclei. The physico-mathematical constraints associated with probability distributions are introduced rigorously in the procedure called the Atomic Shell Approximation (ASA). This procedure, implemented in the ASAC program, starts from a quasi-complete functional space from which the functions or shells of the expansion are selected variationally, according to non-negativity requirements. The quality of these densities and of the derived similarity measures is verified extensively. The ASA model is extended to dynamic representations, physically more accurate in that they are affected by nuclear vibrations, in order to explore the effect of the smoothing of the nuclear peaks on molecular similarity measures. The comparison of dynamic densities with static ones reveals a rearrangement in the dynamic densities, in line with what would constitute a manifestation of the quantum Le Chatelier principle. The ASA procedure, explicitly consistent with the N-representability conditions, is also applied to the direct determination of hydrogen-like electron densities in a density functional theory context. The chapter "Global maximization of the similarity function" presents original algorithms for determining the maximum overlap of molecular electron densities. Quantum molecular similarity measures are identified with the maximum overlap, so that the distance between molecules is measured independently of the reference frames in which the electron densities are defined. Starting from the global solution in the limit of densities infinitely compacted at the nuclei, three levels of approximation are proposed for the systematic, non-stochastic exploration of the similarity function, enabling the efficient identification of the global maximum as well as of the different local maxima. An original parameterization of the overlap integrals through fits to Lorentzian functions is also proposed as a computational acceleration technique. In the practice of structure-activity relationships, these advances enable the efficient implementation of quantitative similarity measures and, in parallel, provide a fully automatic molecular alignment methodology. The chapter "Similarities of atoms in molecules" describes an algorithm for comparing Bader atoms, i.e. three-dimensional regions delimited by zero-flux surfaces of the electron density function.
The quantitative character of these similarities enables a rigorous measurement of the chemical notion of transferability of atoms and functional groups. The zero-flux surfaces and the integration algorithms used have been published recently and constitute the most accurate approach for computing atomic properties. Finally, in the chapter "Similarities in crystal structures", an original definition of similarity is proposed, specific to the comparison of the concepts of smoothness or softness in the distribution of phonons associated with the crystal structure. These concepts appear in superconductivity studies because of the influence of electron-phonon interactions on the transition temperatures to the superconducting state. When this methodology is applied to the analysis of BEDT-TTF salts, structural correlations between superconducting and non-superconducting salts become evident, in agreement with hypotheses put forward in the literature on the relevance of certain interactions. The thesis concludes with an appendix containing the ASAC program, an implementation of the ASA algorithm, and a final chapter with bibliographic references.
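For densities expanded in 1s Gaussians centred on nuclei (ASA-style models), overlap-type similarity measures have closed-form integrals via the Gaussian product theorem. The sketch below computes a Carbó-like similarity index for two toy densities; the centres, exponents and coefficients are made up for illustration and are not ASA-fitted values.

```python
import numpy as np

def gauss_overlap(alpha, A, beta, B):
    """Overlap integral of two normalized 1s Gaussians (Gaussian product theorem)."""
    d2 = np.sum((np.asarray(A) - np.asarray(B)) ** 2)
    pref = (alpha * beta / (np.pi * (alpha + beta))) ** 1.5
    return pref * np.exp(-alpha * beta / (alpha + beta) * d2)

def overlap_measure(dens1, dens2):
    """Z_AB = integral of rho_A * rho_B for densities given as lists of (coef, exponent, centre)."""
    return sum(c1 * c2 * gauss_overlap(a1, R1, a2, R2)
               for c1, a1, R1 in dens1
               for c2, a2, R2 in dens2)

# Toy ASA-style densities: (coefficient, exponent, nuclear centre), all values illustrative
rho_A = [(1.0, 2.0, (0.0, 0.0, 0.0)), (0.5, 0.8, (1.1, 0.0, 0.0))]
rho_B = [(1.0, 1.8, (0.1, 0.0, 0.0)), (0.5, 0.9, (1.0, 0.2, 0.0))]

Z_AB = overlap_measure(rho_A, rho_B)
carbo = Z_AB / np.sqrt(overlap_measure(rho_A, rho_A) * overlap_measure(rho_B, rho_B))
print("overlap similarity Z_AB =", round(Z_AB, 4), " Carbo index =", round(carbo, 4))
```

In the alignment problem discussed in the thesis, this overlap would be maximized over the relative rotation and translation of one density with respect to the other.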
Abstract:
Once abundant, the Newfoundland Gray-cheeked Thrush (Catharus minimus minimus) has declined by as much as 95% since 1975. Underlying cause(s) of this population collapse are not known, although hypotheses include loss of winter habitat and the introduction of red squirrels (Tamiasciurus hudsonicus) to Newfoundland. Uncertainties regarding habitat needs are also extensive, and these knowledge gaps are an impediment to conservation. We investigated neighborhood (i.e., within 115 m [4.1 ha]) and landscape scale (i.e., within 1250 m [490.8 ha]) habitat associations of Gray-cheeked Thrush in a 200-km² study area in the Long Range Mountains of western Newfoundland, where elevations range from 300-600 m and landcover was a matrix of old growth fir forest, 6- to 8-year-old clearcuts, coniferous scrub, bogs, and barrens. Thrushes were restricted to elevations above ~375 m, and occurrence was strongly positively related to elevation. Occurrence was also positively related to cover of tall scrub forest at the neighborhood scale, and at the landscape scale showed curvilinear relations with the proportion of both tall scrub and old growth forest that peaked with intermediate amounts of cover. Occurrence of thrushes was also highest when clearcuts made up 60%-70% of neighborhood landcover, but was negatively related to cover of clearcuts in the broader landscape. Finally, occurrence was highest in areas having 50% cover of partially harvested forest (strip cuts or row cuts) at the neighborhood scale, but because this treatment was limited to one small portion of the study area, this finding may be spurious. Taken together, our results suggest selection for mixed habitats and sensitivity to both neighborhood and landscape-scale habitat. More research is needed on responses of thrushes to forestry, including use of older clearcuts, partially harvested stands, and precommercially thinned clearcuts. Finally, restriction of thrushes to higher elevations is consistent with the hypothesis that they have been impacted by squirrels, because squirrels were rare or absent at these elevations.
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities who produce, consume and, of course, also share media artefacts. Along with this trend, the violation of personal data privacy and copyright has increased, with illegal file sharing being rampant across many online communities, particularly for certain music genres and amongst the younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owner with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property. However, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities that are being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
Abstract:
The soil microflora is very heterogeneous in its spatial distribution. The origins of this heterogeneity and its significance for soil function are not well understood. A problem for understanding spatial variation better is the assumption of statistical stationarity that is made in most of the statistical methods used to assess it. These assumptions are made explicit in geostatistical methods that have been increasingly used by soil biologists in recent years. Geostatistical methods are powerful, particularly for local prediction, but they require the assumption that the variability of a property of interest is spatially uniform, which is not always plausible given what is known about the complexity of the soil microflora and the soil environment. We have used the wavelet transform, a relatively new innovation in mathematical analysis, to investigate the spatial variation of abundance of Azotobacter in the soil of a typical agricultural landscape. The wavelet transform entails no assumptions of stationarity and is well suited to the analysis of variables that show intermittent or transient features at different spatial scales. In this study, we computed cross-variograms of Azotobacter abundance with the pH, water content and loss on ignition of the soil. These revealed scale-dependent covariation in all cases. The wavelet transform also showed that the correlation of Azotobacter abundance with all three soil properties depended on spatial scale, the correlation generally increased with spatial scale and was only significantly different from zero at some scales. However, the wavelet analysis also allowed us to show how the correlation changed across the landscape. For example, at one scale Azotobacter abundance was strongly correlated with pH in part of the transect, and not with soil water content, but this was reversed elsewhere on the transect. The results show how scale-dependent variation of potentially limiting environmental factors can induce a complex spatial pattern of abundance in a soil organism. The geostatistical methods that we used here make assumptions that are not consistent with the spatial changes in the covariation of these properties that our wavelet analysis has shown. This suggests that the wavelet transform is a powerful tool for future investigation of the spatial structure and function of soil biota. (c) 2006 Elsevier Ltd. All rights reserved.
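The scale-by-scale correlation idea described above can be sketched with a discrete wavelet transform applied to two co-located transect series (e.g. Azotobacter abundance and pH) and a correlation of the detail coefficients at each level. The library, wavelet choice and synthetic data below are assumptions for illustration, not the authors' exact analysis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 256                                    # sampling points along the transect
x = np.arange(n)
ph = np.sin(2 * np.pi * x / 64) + 0.3 * rng.normal(size=n)            # synthetic pH transect
azoto = 0.8 * np.sin(2 * np.pi * x / 64) + 0.6 * rng.normal(size=n)   # synthetic abundance transect

# Decompose both series and correlate the detail coefficients level by level
coeffs_ph = pywt.wavedec(ph, "db4", level=5)
coeffs_az = pywt.wavedec(azoto, "db4", level=5)
for lvl, (d_ph, d_az) in enumerate(zip(coeffs_ph[1:], coeffs_az[1:]), start=1):
    r = np.corrcoef(d_ph, d_az)[0, 1]
    print(f"level {lvl} (coarse to fine): correlation = {r:+.2f}")
```

Unlike a single cross-variogram, inspecting the coefficients within each level also shows where along the transect the scale-dependent correlation holds, which is the advantage the abstract emphasises.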
Abstract:
Particle size distribution (psd) is one of the most important features of the soil because it affects many of its other properties, and it determines how soil should be managed. To understand the properties of chalk soil, psd analyses should be based on the original material (including carbonates), and not just the acid-resistant fraction. Laser-based methods rather than traditional sedimentation methods are being used increasingly to determine particle size, in order to reduce the cost of analysis. We give an overview of both approaches and the problems associated with them for analyzing the psd of chalk soil. In particular, we show that it is not appropriate to use the widely adopted 8 µm boundary between the clay and silt size fractions for samples determined by laser to estimate proportions of these size fractions that are equivalent to those based on sedimentation. We present data from field and national-scale surveys of soil derived from chalk in England. Results from both types of survey showed that laser methods tend to over-estimate the clay-size fraction compared to sedimentation for the 8 µm clay/silt boundary, and we suggest reasons for this. For soil derived from chalk, either the sedimentation methods need to be modified or it would be more appropriate to use a 4 µm threshold as an interim solution for laser methods. Correlations between the proportions of sand- and clay-sized fractions, and other properties such as organic matter and volumetric water content, were the opposite of what one would expect for soil dominated by silicate minerals. For water content, this appeared to be due to the predominance of porous chalk fragments in the sand-sized fraction rather than quartz grains, and the abundance of fine (<2 µm) calcite crystals rather than phyllosilicates in the clay-sized fraction. This was confirmed by scanning electron microscope (SEM) analyses. "Of all the rocks with which I am acquainted, there is none whose formation seems to tax the ingenuity of theorists so severely, as the chalk, in whatever respect we may think fit to consider it." Thomas Allan, FRS Edinburgh 1823, Transactions of the Royal Society of Edinburgh. (C) 2009 Natural Environment Research Council (NERC). Published by Elsevier B.V. All rights reserved.