465 results for Histogram Equalization
Abstract:
Proposed is a symbol-based decision-directed algorithm for blind equalisation of quadrature amplitude modulation (QAM) signals using a decision feedback scheme. Independently of QAM order, it presents: (i) an error equal to zero when the equaliser output coincides with the transmitted signal; (ii) simultaneous recovery of the modulus and phase of the signal; (iii) a misadjustment close to that of the normalised least-mean squares algorithm; (iv) fast convergence; and (v) the avoidance of degenerative solutions. Additionally, its stability is ensured when the step-size is properly chosen.
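The abstract does not give the update equations. As a rough illustration of the decision-directed idea only (a linear feed-forward equalizer with an LMS-style update, not the authors' decision-feedback algorithm), the sketch below uses the fact that the error is the difference between the equalizer output and its nearest constellation point, so it vanishes exactly when the output coincides with a transmitted symbol:

```python
import numpy as np

def qam_slicer(z, levels=(-3, -1, 1, 3)):
    """Nearest constellation point (16-QAM levels by default)."""
    lv = np.asarray(levels, dtype=float)
    return lv[np.argmin(np.abs(lv - z.real))] + 1j * lv[np.argmin(np.abs(lv - z.imag))]

def dd_lms_equalizer(x, n_taps=11, mu=1e-3):
    """Decision-directed LMS: error = slicer(output) - output, so it is
    exactly zero whenever the output lands on a constellation point."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]             # regressor, most recent sample first
        y[n] = np.vdot(w, u)                  # w^H u
        e = qam_slicer(y[n]) - y[n]           # decision-directed error
        w += mu * u * np.conj(e)              # LMS-style coefficient update
    return y, w
```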
Abstract:
Objective: To evaluate the association between gender and use of alcohol, tobacco, and other drugs in adolescents aged 10 to 18 years in the municipalities of Jacare and Diadema, Sao Paulo, Brazil. Methods: A total of 971 adolescents completed the Drug Use Screening Inventory (DUSI). Results: In our sample, 55% of adolescents were male; 33.8% reported use of alcohol in the previous month, 13.5% of cigarettes, and 6.4% of illicit drugs. There was no significant difference between genders in the use of alcohol, tobacco, or illicit drugs in any of the analyses (p > 0.05). The use of alcohol, tobacco, and illicit drugs was associated with city, age, educational level, school failure, and relationship with parents (p < 0.05). Conclusions: Substance abuse among adolescents in our sample seems to follow the recent global trend towards the equalization of drug use between genders. This result should be taken into account by public health professionals when developing policies for this problem. (C) 2012 Elsevier Editora Ltda. All rights reserved.
Abstract:
We investigate the nonequilibrium roughening transition of a one-dimensional restricted solid-on-solid model by directly sampling the stationary probability density of a suitable order parameter as the surface adsorption rate varies. The shapes of the probability density histograms suggest a typical Ginzburg-Landau scenario for the phase transition of the model, and estimates of the "magnetic" exponent seem to confirm its mean-field critical behavior. We also found that the flipping times between the metastable phases of the model scale exponentially with the system size, signaling the breaking of ergodicity in the thermodynamic limit. Incidentally, we discovered that a closely related model not considered before also displays a phase transition with the same critical behavior as the original model. Our results support the usefulness of off-critical histogram techniques in the investigation of nonequilibrium phase transitions. We also briefly discuss in the appendix a good and simple pseudo-random number generator used in our simulations.
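The RSOS dynamics themselves are not reproduced here. As a hedged illustration of the histogram step only, the sketch below assumes a scalar order-parameter time series m[t] from some stationary simulation, estimates its stationary density, and reports the local maxima (the double-peak structure suggestive of a Ginzburg-Landau picture):

```python
import numpy as np

def order_parameter_histogram(m, bins=60):
    """Estimate the stationary probability density of an order-parameter
    time series m[t] and report the locations of its local maxima."""
    density, edges = np.histogram(m, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks = [centers[i] for i in range(1, bins - 1)
             if density[i] > density[i - 1] and density[i] > density[i + 1]]
    return centers, density, peaks

# Synthetic bimodal data standing in for simulation output:
rng = np.random.default_rng(0)
m = np.concatenate([rng.normal(-0.8, 0.2, 50_000), rng.normal(0.8, 0.2, 50_000)])
_, _, peaks = order_parameter_histogram(m)
print("histogram peaks near:", peaks)
```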
Abstract:
The escape dynamics of a classical light ray inside a corrugated waveguide is characterised by the use of scaling arguments. The model is described via a two-dimensional nonlinear and area preserving mapping. The phase space of the mapping contains a set of periodic islands surrounded by a large chaotic sea that is confined by a set of invariant tori. When a hole is introduced in the chaotic sea, letting the ray escape, the histogram of frequency of the number of escaping particles exhibits rapid growth, reaching a maximum value at n(p) and later decaying asymptotically to zero. The behaviour of the histogram of escape frequency is characterised using scaling arguments. The scaling formalism is widely applicable to critical phenomena and useful in characterisation of phase transitions, including transitions from limited to unlimited energy growth in two-dimensional time varying billiard problems. (C) 2011 Elsevier B.V. All rights reserved.
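The corrugated-waveguide mapping is not given in the abstract. Purely to illustrate how an escape-frequency histogram is accumulated, the sketch below uses the Chirikov standard map as a stand-in two-dimensional area-preserving map, opens a "hole" in phase space, and histograms the iteration at which each orbit first escapes (orbits that never escape within n_max iterations are simply not recorded):

```python
import numpy as np

def escape_histogram(k=1.5, hole=(0.48, 0.52), n_orbits=20_000,
                     n_max=10_000, seed=1):
    """Iterate a stand-in area-preserving map (standard map) from random
    initial conditions and histogram the iteration n at which the scaled
    action p/(2*pi) first falls inside the hole."""
    rng = np.random.default_rng(seed)
    escapes = []
    for _ in range(n_orbits):
        theta, p = rng.uniform(0, 2 * np.pi), rng.uniform(0, 0.1)
        for n in range(1, n_max + 1):
            p = (p + k * np.sin(theta)) % (2 * np.pi)
            theta = (theta + p) % (2 * np.pi)
            if hole[0] <= p / (2 * np.pi) <= hole[1]:
                escapes.append(n)
                break
    counts, edges = np.histogram(escapes, bins=np.logspace(0, 4, 40))
    return counts, edges
```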
Abstract:
Background: It has been speculated that the biostimulatory effect of Low Level Laser Therapy (LLLT) could cause undesirable enhancement of tumor growth in neoplastic diseases. The aim of the present study is to analyze the behavior of melanoma cells (B16F10) in vitro and the in vivo development of melanoma in mice after laser irradiation. Methods: We performed a controlled in vitro study on B16F10 melanoma cells to investigate cell viability and cell cycle changes using the Trypan Blue, MTT and CellQuest histogram tests at 24, 48 and 72 h post irradiation. The in vivo mouse model (male Balb/c, n = 21) of melanoma was used to analyze tumor volume and histological characteristics. Laser irradiation was performed three times (once a day for three consecutive days) with a 660 nm 50 mW CW laser, beam spot size 2 mm2, irradiance 2.5 W/cm2 and irradiation times of 60 s (dose 150 J/cm2) and 420 s (dose 1050 J/cm2), respectively. Results: There were no statistically significant differences between the in vitro groups, except for an increase in the hypodiploid melanoma cells (8.48 ± 1.40% and 4.26 ± 0.60%) at 72 h post-irradiation. This cancer-protective effect was not reproduced in the in vivo experiment, where outcome measures for the 150 J/cm2 dose group were not significantly different from controls. For the 1050 J/cm2 dose group, there were significant increases in tumor volume, blood vessels and cell abnormalities compared to the other groups. Conclusion: LLLT irradiation should be avoided over melanomas, as the combination of high irradiance (2.5 W/cm2) and high dose (1050 J/cm2) significantly increases melanoma tumor growth in vivo.
Abstract:
This final-year undergraduate project (TFG) consists of an application for detecting full-body persons. The idea is to apply this detector to a continuous stream of images captured in real time from a webcam, or from a video file stored on the system itself. The code is written in C++. To achieve this goal, two existing detection systems are used together: first, OpenCV, through a histogram of oriented gradients (HOG) method, which already provides a person detector that is applied to each image of the video stream; second, the face detector from the Encara library, which is applied to each candidate person detection returned by the OpenCV stage in order to check whether a face is present. If so, and the face is located roughly where expected, the detection is accepted as a real person. For each detected person, its position data in the image are stored in a list and later compared with the data obtained in previous frames, in order to track all persons. Visually, each person is framed with a randomly assigned colour while the video is displayed. The time and frame of appearance, and the time and frame of exit, of each detected person are also recorded, and these data are saved both to a log file and to a database. The results are quite satisfactory, although there is room for improvement, since the work allows other techniques beyond those described to be combined. Owing to the complexity of the methods employed, high computing power is needed to run the application in real time without slowdowns.
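The original implementation is in C++ and uses OpenCV together with the Encara face detector. The sketch below is a hedged Python/OpenCV approximation of the two-stage pipeline, in which the Encara stage is replaced by a stock OpenCV Haar face cascade purely as a stand-in, and the tracking, logging and database steps are omitted:

```python
import cv2

# Stage 1 detector: OpenCV's built-in HOG person detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
# Stand-in for the Encara face detector used in the original work.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # webcam; pass a file path for video input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Stage 1: HOG-based full-body person detection.
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Stage 2: accept the detection only if a face is found inside it.
        faces = face_cascade.detectMultiScale(roi, 1.1, 4)
        if len(faces) > 0:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```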
Abstract:
University Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
Monte Carlo simulations of the critical behavior of thin Ising films. Thin Ising films can serve as a simplified model for describing binary mixtures or fluids in slit-like capillaries. Owing to the confined geometry, the critical behavior of these systems differs significantly from that of a bulk system: a crossover from two- to three-dimensional critical behavior occurs. In addition, the phase transition shifts into the unsaturated regime, an effect known as 'capillary condensation'. In the present work, the critical properties of Ising films were investigated by Monte Carlo simulation. To improve efficiency, a cluster algorithm was used, extended by a ghost-spin term to handle the magnetic fields. Modern multi-histogram techniques were employed in the data analysis. For all film thicknesses studied, the critical temperature and magnetic field could be determined very precisely. The scaling hypothesis of Fisher and Nakanishi, which describes the shift of the critical point relative to its bulk value, was confirmed both for systems with free surfaces and for systems with a weak surface field. The surface gap exponent was estimated as $\Delta_1 = 0.459(13)$, in agreement with values from the literature. The observables magnetization and magnetic susceptibility, as well as their surface counterparts, do not show purely two-dimensional critical behavior. To describe them near the critical point, effective exponents were determined for the individual film thicknesses.
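The thesis relies on multi-histogram techniques for the data analysis. As a hedged illustration of the underlying idea only, the sketch below implements single-histogram (Ferrenberg-Swendsen style) reweighting of an observable from the simulated inverse temperature beta0 to a nearby beta, with synthetic placeholder data standing in for simulation output:

```python
import numpy as np

def reweight(energy, observable, beta0, beta):
    """Single-histogram reweighting: estimate <O> at beta from samples
    (E_i, O_i) generated in equilibrium at beta0.  Log-weights are shifted
    by their maximum for numerical stability."""
    logw = -(beta - beta0) * energy
    logw -= logw.max()
    w = np.exp(logw)
    return np.sum(w * observable) / np.sum(w)

# Usage with synthetic placeholder data:
rng = np.random.default_rng(0)
E = rng.normal(-1.0, 0.05, size=100_000)   # energies sampled at beta0
M = np.tanh(-E)                            # fake observable, for illustration only
print(reweight(E, M, beta0=0.43, beta=0.44))
```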
Abstract:
A sample scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument allows time-resolved fluorescence, Raman scattering and light scattering to be measured from the same diffraction-limited spot. Fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This permits different detection schemes and experimental geometries to be compared in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, due to the mediation of surface plasmons, single molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited state lifetime) of single molecules were studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and the autocorrelation methods for the analysis of single molecule fluorescence blinking were presented and compared via the analysis of Monte-Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory. The presence of the gold had no influence on the intersystem crossing (ISC) rate from the excited state to the triplet, but increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se QDs on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte-Carlo simulations. At low P, it was observed that the probability of a certain on- or off-time follows a negative power law with exponent close to 1.6. As P increased, the on-time fraction decreased on both substrates whereas the off-times did not change. A weak residual memory effect between consecutive on-times and consecutive off-times was observed, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability, and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All the experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime tPI of the on-state in the simulations. The QDs on glass presented a tPI proportional to P^-1, suggesting the presence of a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired.
The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
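Neither the trace-histogram nor the autocorrelation code of the thesis is reproduced here. The sketch below only illustrates the common preprocessing step assumed in such blinking analyses: thresholding a binned intensity trace into on/off periods and histogramming the durations on logarithmic bins, where a power law with exponent near 1.6 would appear as a straight line on a log-log plot:

```python
import numpy as np

def on_off_durations(intensity, threshold, bin_time):
    """Threshold a binned intensity trace and return the durations (in
    seconds) of consecutive on and off periods."""
    state = intensity > threshold
    flips = np.flatnonzero(np.diff(state.astype(int))) + 1   # state-change indices
    edges = np.concatenate(([0], flips, [len(state)]))
    durations = np.diff(edges) * bin_time
    states = state[edges[:-1]]
    return durations[states], durations[~states]             # on-times, off-times

def log_histogram(times, n_bins=30):
    """Histogram durations on logarithmically spaced bins, normalised to a
    probability density (suitable for power-law inspection)."""
    bins = np.logspace(np.log10(times.min()), np.log10(times.max()), n_bins)
    density, edges = np.histogram(times, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, density
```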
Abstract:
In recent years, the energy question has taken on a central role in the global debate, driven by four main factors: the non-renewability of natural resources, the exponential growth of consumption, economic interests, and the preservation of the environmental and climatic balance of our planet. It is therefore necessary to change the model of energy production and consumption, especially in cities, where energy consumption is most concentrated. For these reasons, recourse to Renewable Energy Sources (FER) now appears to be a necessary, appropriate and urgent measure in urban planning as well. To improve the overall energy performance of the city system, transformation policies must move beyond a "building-centric" operational logic and encompass, beyond the individual building, aggregations of buildings and their relations/interactions in terms of material and energy inputs and outputs. A wholesale replacement of the existing building stock with new hyper-technological buildings is not feasible. How, then, can planning regulations and practice be redefined so as to generate energy-efficient urban fabrics? This research proposes integrating the emerging energy planning of the territory with the more established urban planning rules, in order to generate "energy saving" urban fabrics that add the energy and environmental performance of the context to that of the individual buildings, within an overall energy balance. After describing and comparing the main Renewable Energy Sources available today, the study proposes a methodology for a preliminary assessment of the mix of technologies and sources best suited to each site configured as an "energy district". The results of this process provide the basic elements for the actions needed to integrate energy matters into urban plans, by applying equalization principles in the definition of performance requirements at the settlement scale, which are essential for a correct transition to the design of urban "objects" and "systems".
Abstract:
The first part reconstructs the concept of the expropriation-like constraint ("vincolo espropriativo") in light of the case law of the Italian Constitutional Court and of the European Court of Human Rights, concluding that this concept covers limitations on property rights that: (i) derive from discretionary choices of the administration unrelated to the objective characteristics of the asset; and (ii) exceed normal tolerability, in the sense that they prevent the owner from continuing the existing use, or affect the market value of the asset disproportionately with respect to its objective characteristics and to the public interest pursued. The underlying rationale of the constraints doctrine is to curb the excessive discretion of planning powers by requiring greater objectivity and reviewability of planning choices. It also follows from the constraints doctrine that, in exercising planning powers, the administration, while it may differentiate the territory, must pursue the objective of economic rebalancing of the interests affected by its decisions. The obligation to pay compensation constitutes the first form of planning equalization. The third and fourth chapters analyse the civil and administrative case law on planning constraints, noting its lack of correspondence with the Constitutional Court's doctrine and the inconsistency of its practical outcomes. In particular, the need is highlighted to move beyond the criterion based on the zoning/localization distinction and to treat as merely conformative only those land-use designations achievable through private initiative that in practice allow the owner to obtain an economic benefit proportionate to the market value of the asset. The fifth chapter analyses the relationship between the constraints doctrine and planning equalization, identifying the dividing line between the two institutions not only in consent but also in the proportionality of the reciprocal contractual obligations. Equalization cannot confer on the owner a benefit lower than the one that would derive from expropriation compensation.
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. Several independent simulations (N windows) are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original, unbiased system energy starting from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can considerably speed up WHAM execution compared to previous serial CPU implementations; the serial WHAM CPU code shows timing bottlenecks for very large numbers of iterations. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computing. It is likely that a further performance increase would be obtained if the algorithm were executed on GPU clusters with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications to the study of ion channels and to Molecular Docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
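The CUDA implementation itself is not reproduced here. As a minimal serial sketch of what WHAM actually iterates (and what the thesis parallelizes), the following assumes per-window biased histograms counts[i, b] and umbrella bias energies bias[i, b] evaluated on a common grid of bin centers, and solves the WHAM self-consistency equations for the unbiased distribution:

```python
import numpy as np

def wham(counts, bias, kT=2.494, n_iter=5000, tol=1e-8):
    """Serial WHAM: counts[i, b] = biased histogram of window i in bin b,
    bias[i, b] = umbrella potential U_i at bin center b (same energy units
    as kT; 2.494 kJ/mol corresponds to ~300 K).  Returns the unbiased
    probability P[b] and the free energy F[b] = -kT ln P[b]."""
    beta = 1.0 / kT
    N = counts.sum(axis=1)                      # samples per window
    f = np.zeros(counts.shape[0])               # per-window free-energy shifts
    for _ in range(n_iter):
        # Unbiased estimate: P(b) = sum_i n_i(b) / sum_i N_i exp(beta*(f_i - U_i(b)))
        denom = (N[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        P = counts.sum(axis=0) / denom
        P /= P.sum()
        # Self-consistency: exp(-beta*f_i) = sum_b P(b) exp(-beta*U_i(b))
        f_new = -kT * np.log((P[None, :] * np.exp(-beta * bias)).sum(axis=1))
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return P, -kT * np.log(P)    # bins with zero counts give infinite free energy
```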
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar in the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for rainfall positive amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
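No code accompanies the abstract; the toy sketch below only illustrates the two-part semicontinuous structure described (probit link for rain occurrence, Gamma distribution for positive amounts, radar entering both linear predictors on the log scale). Coefficient names and values are hypothetical, and the spatial Gaussian effects and MCMC estimation are omitted:

```python
import numpy as np
from scipy.stats import norm, gamma

def simulate_two_part(radar, a0, a1, b0, b1, shape, seed=0):
    """Simulate hourly rainfall given radar values: probit occurrence model
    plus Gamma positive amounts, both driven by radar on the log scale
    (illustrative only; hypothetical coefficients)."""
    rng = np.random.default_rng(seed)
    log_radar = np.log1p(radar)
    p_rain = norm.cdf(a0 + a1 * log_radar)          # probit occurrence probability
    mean_amt = np.exp(b0 + b1 * log_radar)          # mean of positive amounts
    wet = rng.uniform(size=radar.shape) < p_rain
    amounts = gamma.rvs(shape, scale=mean_amt / shape, random_state=rng)
    return np.where(wet, amounts, 0.0)              # zeros where no rain occurs

rain = simulate_two_part(radar=np.array([0.0, 0.5, 2.0, 10.0]),
                         a0=-1.0, a1=1.2, b0=-0.5, b1=0.9, shape=0.8)
```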
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is the control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region, and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty in characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either the source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it is possible to obtain information about which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2, as well as a 10 x 10 cm2 field 5 cm off axis in each direction. The 3D dose distributions, using either the full PSD or the source model as input, were compared in terms of dose-difference and distance-to-agreement. With this model, over 99% of the voxels agreed within +/-1% or 1 mm for the target, within 2% or 2 mm for the primary collimator, and within +/-2.5% or 2 mm for the flattening filter in all cases studied. For the dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model (including a charged particle source) and the full PSD were used as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source model commissioning procedures by allowing the histogram distributions representing the subsources to be scaled and tuned.
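As a hedged illustration of the dose-difference / distance-to-agreement style of comparison quoted above (a simplified composite pass-rate on two equally sized 3D dose grids, not the authors' analysis code; the DTA search uses a wrap-around neighbourhood for brevity):

```python
import numpy as np

def pass_rate(dose_ref, dose_eval, voxel_mm, dd_pct=1.0, dta_mm=1.0):
    """Fraction of voxels meeting EITHER a dose-difference criterion
    (percent of the reference maximum) OR a distance-to-agreement
    criterion (a matching reference dose within dta_mm)."""
    tol = dd_pct / 100.0 * dose_ref.max()
    dd_pass = np.abs(dose_eval - dose_ref) <= tol

    # DTA: search a small neighbourhood for a reference dose matching the
    # evaluated dose within the same tolerance (edge wrap-around ignored).
    r = tuple(int(np.ceil(dta_mm / s)) for s in voxel_mm)
    dta_pass = np.zeros_like(dd_pass)
    for dz in range(-r[0], r[0] + 1):
        for dy in range(-r[1], r[1] + 1):
            for dx in range(-r[2], r[2] + 1):
                if (dz * voxel_mm[0])**2 + (dy * voxel_mm[1])**2 \
                        + (dx * voxel_mm[2])**2 > dta_mm**2:
                    continue
                shifted = np.roll(dose_ref, (dz, dy, dx), axis=(0, 1, 2))
                dta_pass |= np.abs(dose_eval - shifted) <= tol
    return np.mean(dd_pass | dta_pass)
```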