994 results for digital signatures
Abstract:
Using the digital-signature method, 35 esophageal biopsies from patients in Linxian province, China, were studied. Two observers with extensive experience in gastrointestinal pathology classified them as normal, dysplasia, or carcinoma (8 normal cases, 6 mild dysplasias, 8 moderate dysplasias, 4 severe dysplasias, 4 carcinomas suspicious for invasion, and 5 invasive carcinomas). The aim of the study was to characterize the nuclei of the cell populations in these cases so that diagnostic information, and possibly prognostic implications, could be derived from the quantitative study of the nuclear features of each case or diagnostic category. The biopsies were stained by the Feulgen method, and 48 to 50 nuclei from each were selected and digitized. From each nucleus, 93 karyometric features were extracted and arbitrarily arranged in a histogram designated the nuclear signature. The arithmetic mean of each feature over the nuclei of a given biopsy yielded the digital signature of the case. Discriminant function analysis, based on the 15 karyometric features offering the best discrimination among the diagnostic categories, showed that the group classified as normal was clearly distinct from the other categories. Total optical density increased progressively along the biopsy classification, from normal to severe dysplasia, with the carcinoma value similar to that of moderate dysplasia. The run-length matrix showed the same profile; that is, both features offered clear discrimination among the diagnostic categories, except for invasive carcinoma, whose values overlapped those of moderate dysplasia.
The study demonstrated the feasibility of quantifying nuclear features through digital nuclear signatures, which revealed statistically significant differences between diagnostic categories and a progressive rise of the measured values along the spectrum of lesions, presented as a histogram (the nuclear digital signature).
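The per-case averaging described above can be sketched in a few lines. The shapes (roughly 50 nuclei per biopsy, 93 features per nucleus) come from the abstract; the function name and the synthetic data are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Sketch of the "digital signature" construction described above: each
# nucleus yields a vector of 93 karyometric features, and the case-level
# signature is the arithmetic mean of those vectors over the ~50 nuclei
# sampled from one biopsy. Data here are synthetic placeholders.

def case_signature(nuclei_features):
    """nuclei_features: array of shape (n_nuclei, 93) -> (93,) signature."""
    return np.asarray(nuclei_features).mean(axis=0)

rng = np.random.default_rng(0)
biopsy = rng.random((50, 93))       # 50 digitized nuclei x 93 features
signature = case_signature(biopsy)  # the biopsy's digital signature
print(signature.shape)              # (93,)
```

Discriminant analysis over such case signatures (e.g. on the 15 most discriminating features) could then proceed with any standard statistics package.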
Abstract:
Starting from the AGN sample of the XMM-COSMOS survey, we searched for its optical counterparts in the DR10 database of the Sloan Digital Sky Survey (SDSS); the match yielded a selection of 200 objects, including stars, galaxies, and quasars. From this sample we selected all objects with redshift z < 0.86, to restrict the analysis to type 2 AGN, arriving at a final sample of 30 sources. The spectral analysis was performed with the SPECFIT task in IRAF. We built two kinds of models: in the first we considered a single component for each emission line; in the second an additional component was introduced, limiting the FWHM of the first to values below 500 km/s. The emission lines modeled were: Hβ, [NII]λλ6548,6581, Hα, [SII]λλ6716,6731, and [OIII]λλ4959,5007. In building the models we took atomic physics into account for the theoretical flux ratios of the nitrogen and oxygen doublets, fixing both at 1:3; in the one-component model we fixed the FWHMs of the emission lines, while in the two-component model we fixed the FWHMs of the narrow and of the broad components separately. Based on the chi-square of each fit and on the residuals, it was possible to choose between the two models for each source. Since our attention is focused on the kinematics of oxygen, we considered only the sources whose spectra showed that line, i.e. 25 objects. A non-parametric analysis was performed on this line, following the method proposed by Harrison et al. (2014) to characterize the line profile. Useful quantities were determined, such as the 2nd and 98th percentiles, corresponding to the maximum projected velocities of the outflow, and the line width containing 80% of the emission.
To investigate the possible role of the AGN in driving these outflows, we computed the mass of ionized gas in the outflow and the kinetic energy rate, taking into account only the broad components of the [OIII]λ5007 line. For the energetic characterization we considered the approaches of Cano-Diaz et al. (2012) and of Heckman (1990), so as to obtain a lower and an upper limit on the kinetic power, adopting the geometric mean of the two as an indicative value of the energetics involved. Comparing the outflow power with the bolometric luminosity of the AGN, we found that the kinetic energy rate of the outflow is roughly 0.3-30% of the AGN luminosity, consistent with models in which the AGN is the main driver of these gas flows.
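The non-parametric line characterization and the geometric-mean energetics can be illustrated with a short sketch. The Gaussian test profile and all numerical values are assumptions for illustration, not the measurements of this work:

```python
import numpy as np

# Non-parametric line-profile quantities in the spirit of Harrison et al.
# (2014): velocities enclosing given fractions of the cumulative line flux.
# v02 and v98 trace the maximum projected outflow velocities, and
# W80 = v90 - v10 is the width containing 80% of the emission.

def percentile_velocity(v, flux, frac):
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]
    return np.interp(frac, cdf, v)

v = np.linspace(-2000.0, 2000.0, 4001)        # velocity grid, km/s
flux = np.exp(-0.5 * (v / 300.0) ** 2)        # synthetic Gaussian [OIII] line
v02 = percentile_velocity(v, flux, 0.02)
v98 = percentile_velocity(v, flux, 0.98)
w80 = percentile_velocity(v, flux, 0.90) - percentile_velocity(v, flux, 0.10)

# Geometric mean between a lower and an upper bound on the kinetic power,
# as adopted in the abstract (the bounds here are placeholders, in erg/s).
E_low, E_high = 1e41, 1e43
E_kin = np.sqrt(E_low * E_high)
```

For a Gaussian profile W80 reduces to about 2.56 times the velocity dispersion, which makes the synthetic case easy to sanity-check.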
Abstract:
The use of canines as a method of detecting explosives is well established worldwide, and those applying this technology range from police forces and law enforcement to humanitarian agencies in the developing world. Despite the recent surge in publications on novel instrumental sensors for explosives detection, canines are still regarded by many as the most effective real-time field method of explosives detection. However, unlike with instrumental methods, it is currently difficult to determine detection levels, calibrate the canines' ability, or produce scientifically valid quality control checks. Accordingly, given increasingly strict requirements for the admission of forensic evidence, such as the Frye and Daubert standards, there is a need for a better scientific understanding of the process of canine detection. As with any instrumental technique, peer-reviewed publication of reliability, success, and error rates is required for canine detection evidence to be admissible. Training is commonly focused on high explosives such as TNT and Composition 4, while low explosives such as black and smokeless powders are often added only for completeness. Headspace analyses of explosive samples, performed by Solid Phase Microextraction (SPME) paired with Gas Chromatography-Mass Spectrometry (GC-MS) and Gas Chromatography-Electron Capture Detection (GC-ECD), were conducted, highlighting common odour chemicals. The odour chemicals detected were then presented to previously trained and certified explosives detection canines, and the activity or inactivity of each odour was determined through field trials and experiments. It was demonstrated that TNT and cast explosives share a common odour signature, and the same may be said for plasticized explosives such as Composition C-4 and Deta Sheet. Conversely, smokeless powders were demonstrated not to share common odours.
An evaluation of the effectiveness of commercially available pseudo aids reported limited success. The implications of the explosive odour studies for canine training then led to the development of novel inert training aids based upon the active odours determined.
Abstract:
Stable isotopes are important tools for understanding the trophic roles of elasmobranchs. However, whether different tissues provide consistent stable isotope values within an individual is largely unknown. To address this, the relationships among carbon and nitrogen isotope values were quantified for blood, muscle, and fin from juvenile bull sharks (Carcharhinus leucas) and for blood and fin from large tiger sharks (Galeocerdo cuvier) collected in two different ecosystems. We also investigated the relationship between shark size and the magnitude of differences in isotopic values between tissues. Isotope values were significantly positively correlated for all paired tissue comparisons, but R2 values were much higher for δ13C than for δ15N. Paired differences between the isotopic values of tissues were relatively small but varied significantly with shark total length, suggesting that shark size can be an important factor influencing the magnitude of differences in the isotope values of different tissues. For studies of juvenile sharks, care should be taken in using slow-turnover tissues such as muscle and fin, because they may retain a maternal signature for an extended time. Although correlations were relatively strong, the results suggest that correction factors should be generated for the desired study species and may only allow coarse-scale comparisons between studies using different tissue types.
Abstract:
The mineral and chemical composition of alluvial Upper Pleistocene deposits from the Alto Guadalquivir Basin (SE Spain) was studied as a tool to identify the sedimentary and geomorphological processes controlling their formation. Sediments located upstream, in the north-eastern sector of the basin, are rich in dolomite, illite, MgO, and K2O. Downstream, sediments at the base of the sequence are enriched in calcite, smectite, and CaO, whereas the upper sediments have features similar to those from upstream. Elevated rare-earth element (REE) values can be related to the low carbonate content of the sediments and to the increase of silicate material produced and concentrated during soil formation in the neighbouring source areas. Two mineralogical and geochemical signatures related to different sediment source areas were identified. The basal levels were deposited during a predominantly erosive initial stage and are mainly composed of calcite- and smectite-rich materials enriched in REE, derived from Neogene marls and limestones. The upper levels of the alluvial sequences, composed of dolomite- and illite-rich materials depleted in REE and derived from the surrounding Sierra de Cazorla area, were then deposited during a later, less erosive stage of the fluvial system. This modification was responsible for the change in the mineralogical and geochemical composition of the alluvial sediments.
Abstract:
Cloud edge mixing plays an important role in the life cycle and development of clouds. Entrainment of subsaturated air affects the cloud at the microscale, altering the number density and size distribution of its droplets. The resulting effect is determined by two timescales: the time required for the mixing event to complete, and the time required for the droplets to adjust to their new environment. If mixing is rapid, evaporation of droplets is uniform and said to be homogeneous in nature. In contrast, slow mixing (compared to the adjustment timescale) results in the droplets adjusting to the transient state of the mixture, producing an inhomogeneous result. Studying this process in real clouds requires airborne optical instruments capable of measuring clouds at the 'single particle' level. Single-particle resolution allows direct measurement of the droplet size distribution, in contrast to 'bulk' methods (i.e. hot-wire probes, lidar, radar), which measure a higher-order moment of the distribution and require assumptions about the distribution shape to compute a size distribution. The sampling strategy of current optical instruments requires them to integrate over a path tens to hundreds of meters long to form a single size distribution. This is much larger than typical mixing scales (which can extend down to the order of centimeters), making mixing signatures difficult to resolve. The Holodec is an optical particle instrument that uses digital holography to record discrete, local volumes of droplets. This method allows statistically significant size distributions to be calculated for centimeter-scale volumes, giving full resolution at the scales important to the mixing process. The hologram also records the three-dimensional position of all particles within the volume, allowing the spatial structure of the cloud volume to be studied. Both of these features offer a new and unique view into the mixing problem.
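The two competing timescales described above can be compared with a toy calculation. The turbulence scaling used for the mixing time and every numerical value below are illustrative assumptions, not quantities measured in this work:

```python
# Toy comparison of the two timescales described above: mixing is roughly
# homogeneous when the turbulent mixing time is short compared with the
# droplet adjustment (evaporation) time, and inhomogeneous when it is slow.
# The scaling tau_mix ~ (L^2 / eps)^(1/3) and all numbers are assumptions.

def mixing_timescale(length_m, dissipation_m2_s3):
    """Eddy turnover time for a parcel of size L in turbulence with
    energy dissipation rate eps."""
    return (length_m ** 2 / dissipation_m2_s3) ** (1.0 / 3.0)

tau_mix = mixing_timescale(100.0, 1e-3)   # ~100 m entrained parcel
tau_adjust = 3.0                          # s, assumed droplet response time
regime = "homogeneous" if tau_mix < tau_adjust else "inhomogeneous"
print(regime)
```

At centimeter scales the same formula gives mixing times far below the droplet response time, which is why resolving those scales, as the Holodec does, matters for diagnosing the mixing regime.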
In this dissertation, holographic data recorded during two different field projects are analyzed to study the mixing structure of cumulus clouds. Using Holodec data, it is shown that mixing at cloud top can produce regions of clear but humid air that can subside down along the edge of the cloud as a narrow shell, or advect downshear as a 'humid halo'. This air is then entrained into the cloud at lower levels, producing mixing that appears to be very inhomogeneous. This inhomogeneous-like mixing is shown to be well correlated with regions containing elevated concentrations of large droplets, which is used to argue in favor of the hypothesis that dilution can lead to enhanced droplet growth rates. I also make observations on the microscale spatial structure of the observed cloud volumes recorded by the Holodec.
Abstract:
Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have capacities for co-ordination, reflexive self-correction, and innovation superior to those of other animal, physical, or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology, all of which are antithetical to its reflexive function whilst paradoxically being a fundamental part of it. In particular, under capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called 'knowledge economy' implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a 'knowledge economy' emerges, then, is the critical point at which thought itself becomes a commodified 'thing', and language becomes its 'objective' means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism, the knowledge economy, and the obfuscating trajectory which accompanies it are, we argue, destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in that the language practices that have emerged in conjunction with digital technologies are increasingly non-reflexive and therefore less capable of self-critical, conscious change.