890 results for Tchebyshev metrics
Abstract:
In this thesis we developed solutions to common issues affecting widefield microscopes, facing the problem of intensity inhomogeneity within an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, for both light and fluorescence microscopy, we proposed non-parametric, multi-image-based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to stitch the images together accurately. Finally, we worked on virtually extending the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have considerable depth or that lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcomes of existing methods, different standard metrics are commonly used in the literature. However, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we proved that the approach we developed performs better in both synthetic and real cases.
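As a rough illustration of such a non-parametric, multi-image approach, the following Python sketch estimates a smooth vignetting field from per-pixel statistics over many overlapping fields of view and divides it out. The function names, the median estimator, and the Gaussian smoothing parameter are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_vignetting(images, sigma=25.0):
    """Estimate a multiplicative vignetting field from a stack of images.

    images : iterable of 2-D arrays showing different parts of the sample,
             so that per-pixel statistics average out the specimen content.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    flat = np.median(stack, axis=0)        # median suppresses specimen structure
    flat = gaussian_filter(flat, sigma)    # vignetting varies smoothly
    return flat / flat.mean()              # normalize to unit mean

def flat_field_correct(image, flat, eps=1e-6):
    """Divide out the estimated vignetting field."""
    return np.asarray(image, dtype=np.float64) / np.maximum(flat, eps)
```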
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and interpretation of the results. Other problems are parameter configuration by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for identifying the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events. The use of interval-based recording leads to an important performance improvement. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. We then propose two actual mining algorithms: the first adapts a frequency-counting algorithm to the control-flow discovery problem; the second constitutes a framework of models that can be used for different kinds of streams (stationary versus evolving).
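As a sketch of how a frequency-counting algorithm can be adapted to on-line control-flow discovery, the following Python fragment maintains approximate counts of directly-follows pairs over an event stream, in the style of Lossy Counting (Manku and Motwani). The data structures and the pruning policy are illustrative assumptions, not the algorithm developed in the thesis:

```python
class DirectlyFollowsCounter:
    """Approximate directly-follows counts over an event stream,
    in the style of the Lossy Counting algorithm."""

    def __init__(self, error=0.01):
        self.bucket_width = int(1.0 / error)  # w = ceil(1/epsilon)
        self.entries = {}   # (a, b) -> [count, delta]
        self.last = {}      # case id -> last activity observed
        self.n = 0          # events processed so far

    def observe(self, case_id, activity):
        self.n += 1
        bucket = (self.n - 1) // self.bucket_width + 1  # current bucket id
        prev = self.last.get(case_id)
        if prev is not None:
            pair = (prev, activity)
            if pair in self.entries:
                self.entries[pair][0] += 1
            else:
                self.entries[pair] = [1, bucket - 1]
        self.last[case_id] = activity
        if self.n % self.bucket_width == 0:  # prune at bucket boundaries
            self.entries = {p: e for p, e in self.entries.items()
                            if e[0] + e[1] > bucket}

    def directly_follows(self, min_support):
        """Pairs whose approximate frequency exceeds min_support * n."""
        return [p for p, e in self.entries.items()
                if e[0] >= (min_support - 1.0 / self.bucket_width) * self.n]
```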
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large, and their analysis may be complicated: computing metrics on these structures can be very expensive. Among all such metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with huge numbers of nodes and edges. After an introduction to graph theory and high-performance computing, we explain our design strategies and our implementation. Then, we show a performance evaluation carried out on a distributed-memory architecture, namely the IBM BlueGene/Q "Fermi" supercomputer at the CINECA supercomputing center in Italy, and we comment on our results.
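The parallel algorithm itself is not reproduced here; as a minimal illustration of what community detection computes, the following serial label-propagation sketch (a generic method assumed for illustration, not necessarily the one implemented in the thesis) has each node repeatedly adopt the most frequent label among its neighbors until the labeling stabilizes, so that densely connected groups converge to a shared label:

```python
import random
from collections import Counter

def label_propagation(adjacency, max_iters=50, seed=0):
    """Minimal community detection by label propagation.

    adjacency : dict mapping node -> list of neighbor nodes.
    Returns a dict mapping node -> community label.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}  # start with unique labels
    nodes = list(adjacency)
    for _ in range(max_iters):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adjacency[v]:
                continue
            freq = Counter(labels[u] for u in adjacency[v])
            best = max(freq.values())
            new = rng.choice([l for l, c in freq.items() if c == best])
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:  # converged: no node changed its label
            break
    return labels
```

On a distributed-memory machine such as BlueGene/Q, it is this per-node update that would be parallelized over graph partitions, with label exchanges at partition boundaries.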
Abstract:
The candidate tackled an important issue in contemporary management: the role of CSR and Sustainability. The research focused on a longitudinal, inductive study, directed at tracing the evolution of CSR and at contributing to new institutional theory, in particular the institutional work framework and the relation between institutions and discourse analysis. The documentary analysis covers the entire evolution of CSR, focusing also on a number of important networks and associations. Some of the methodologies employed in the thesis were adopted as a consequence of data analysis, in a truly inductive research process. The thesis is composed of two sections. The first section mainly describes the research process and the results of the analyses. The candidate employed several research methods: a longitudinal content analysis of documents, a vocabulary study using statistical techniques such as cluster analysis and factor analysis, and a rhetorical analysis of justifications. The second section relates the analysis results to theoretical frameworks and contributions. The candidate engaged with several frameworks: Actor-Network Theory, Institutional Work and Boundary Work, and Institutional Logics. The chapters focus on different issues: a historical reconstruction of CSR; a reflection on the symbolic adoption of recurrent labels; two case studies of Italian networks, comparing institutional and boundary work; a theoretical model of institutional change based on contradiction and institutional complexity; and the application of the model to CSR and Sustainability, proposing Sustainability as a possible institutional logic.
Abstract:
Many psychophysical studies suggest that target depth and direction during reaches are processed independently, but neurophysiological support for this view is so far limited. Here, we investigated the representation of reach depth and direction by single neurons in an area of the medial posterior parietal cortex (V6A). Single-unit activity was recorded from V6A in two Macaca fascicularis monkeys performing a fixation-to-reach task to targets at different depths and directions. We found that in a substantial percentage of V6A neurons, depth and direction signals jointly influenced fixation, planning, and arm movement-related activity in 3D space. While target depth and direction were equally encoded during fixation, depth tuning became stronger during arm movement planning, execution, and target holding. The spatial tuning of fixation activity was often maintained across epochs, and this occurred more frequently in depth. These findings support for the first time the existence of a common neural substrate for the encoding of target depth and direction during reaching movements in the posterior parietal cortex. The present results also highlight the presence in V6A of several types of cells that process eye position and arm movement planning and execution signals, independently or jointly, in order to control reaches in 3D space. It is possible that depth and direction also influence the metrics of the reach action, and that this effect on the kinematic variables of the reach can account for the spatial tuning we found in V6A neural activity. For this reason, we recorded and analyzed behavioral data while one monkey performed reaching movements in 3D space. We evaluated how the spatial position of the target, in particular its depth and direction, affected the kinematic parameters and trajectories describing the motor action.
Abstract:
Urban centers contribute significantly to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the contribution of urban centers to anthropogenic air pollution is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and deposition of aerosol particles from 46 individual major population centers (MPCs), as well as into their regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed in one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and on the size and solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from, and pollution build-up around, the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate, and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that, on average, about five times more mass is advected quasi-horizontally at low levels than is exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers. As for insoluble gas-phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol (temperate winter dry) and gas-phase (tropics) tracers. The emitted mass fraction that remains around the MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around urban centers depends more on the population density than on the size of the area affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometers) from MPCs undergo substantial long-range transport, with about half of the emitted mass deposited more than 1000 km from the source. In contrast to this diluted remote deposition, there are areas around the MPCs that experience high deposition rates, especially regions frequently affected by heavy precipitation or situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces. In particular, forests receive more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis.
Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide, and nitrogen oxides from MPCs influence the atmospheric burdens of the various aerosol types very differently, with impacts generally larger for the secondary species, sulfate and nitrate, than for the primary species, black carbon and particulate organic matter. While the changes in the burdens of sulfate, black carbon, and particulate organic matter respond almost linearly to changes in emission strength, the formation of nitrate was found to depend on many more factors (e.g., the abundance of sulfuric acid) than the strength of the nitrogen oxide emissions alone. The generic tracer experiments were further extended to conduct the first global-scale assessment of the cumulative risk of contamination from multiple nuclear reactor accidents. For this, several factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value defining contamination. By collecting the necessary data and accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US, and Japan, where contamination by major accidents is expected, on average, about every 50 years.
Abstract:
Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow the qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the IMS variants HDMSE and UDMSE are particularly well suited for label-free protein quantification. Owing to their high complexity, the data acquired in this way place special demands on the analysis software. Until now, quantitative analysis of MSE/HDMSE/UDMSE data has been limited to a few commercial solutions.

In the present work, a strategy and a series of new methods for the cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The commercial software PLGS is used for the first steps of data analysis (feature detection, peptide and protein identification). The independent PLGS results of all runs of an experiment are then merged in a relational database and reworked by dedicated algorithms (retention time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analyzed with Progenesis QIP, synapter, and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both high reproducibility of protein identification and high precision and accuracy of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With the software ISOQuant, a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data was developed. With the hybrid-proteome samples and the evaluation metrics, a comprehensive system for evaluating quantitative acquisition and data analysis systems was presented.
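As a generic illustration of one of the post-processing steps listed above, cross-run intensity normalization, the following Python sketch applies a median-ratio scaling per run. It is an assumed simplification for illustration only, not the ISOQuant implementation:

```python
import numpy as np

def median_ratio_normalize(intensity):
    """Generic cross-run intensity normalization (median-ratio style).

    intensity : 2-D array, rows = feature clusters, columns = runs.
    Each run is scaled so that the median ratio of its features to a
    reference profile (the row-wise median across runs) equals one.
    """
    X = np.asarray(intensity, dtype=float)
    reference = np.nanmedian(X, axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = X / reference
    factors = np.nanmedian(ratios, axis=0)  # one scale factor per run
    return X / factors
```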
Abstract:
We have investigated the use of hierarchical clustering of flow cytometry data to classify samples of conventional central chondrosarcoma, a malignant cartilage-forming tumor of uncertain cellular origin, according to similarities with the surface marker profiles of several known cell types. Human primary chondrosarcoma cells, articular chondrocytes, mesenchymal stem cells, fibroblasts, and a panel of tumor cell lines of chondrocytic or epithelial origin were clustered based on the expression profile of eleven surface markers. For clustering, eight hierarchical clustering algorithms, three distance metrics, and several approaches for data preprocessing, including multivariate outlier detection, logarithmic transformation, and z-score normalization, were systematically evaluated. By selecting clustering approaches shown to give reproducible results for cluster recovery of known cell types, primary conventional central chondrosarcoma cells could be grouped into two main clusters with distinctive marker expression signatures: one group clustering together with mesenchymal stem cells (CD49b-high/CD10-low/CD221-high) and a second group clustering close to fibroblasts (CD49b-low/CD10-high/CD221-low). Hierarchical clustering also revealed substantial differences between primary conventional central chondrosarcoma cells and established chondrosarcoma cell lines, with the latter not only segregating apart from primary tumor cells and normal tissue cells, but also clustering together with cell lines of epithelial lineage. Our study provides a foundation for the use of hierarchical clustering applied to flow cytometry data as a powerful tool to classify samples according to marker expression patterns, which could help uncover new cancer subtypes.
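A minimal sketch of this kind of pipeline, assuming log transformation, per-marker z-scoring, and agglomerative clustering with a chosen linkage and distance metric (the specific defaults and function names below are illustrative, not the study's exact configuration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_samples(expression, method="average", metric="euclidean", k=2):
    """Hierarchical clustering of samples by surface-marker profile.

    expression : 2-D array, rows = samples, columns = marker intensities.
    Returns an array assigning each sample to one of k flat clusters.
    """
    X = np.log10(np.asarray(expression, dtype=float) + 1.0)  # log transform
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)         # z-score per marker
    Z = linkage(pdist(X, metric=metric), method=method)      # build dendrogram
    return fcluster(Z, t=k, criterion="maxclust")            # cut into k clusters
```

Systematically varying `method` and `metric` over the supported options is one straightforward way to reproduce the kind of algorithm comparison described above.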
Abstract:
An imaging biomarker providing an early quantitative metric of clinical treatment response in cancer patients would represent a paradigm shift in cancer care. Current non-image-based clinical outcome metrics include morphological, clinical, and laboratory parameters; however, these are obtained relatively late following treatment. Diffusion-weighted MRI (DW-MRI) holds promise for use as a cancer treatment response biomarker, as it is sensitive to macromolecular and microstructural changes which can occur at the cellular level earlier during therapy than anatomical changes. Studies have shown that successful treatment of many tumor types can be detected with DW-MRI as an early increase in the apparent diffusion coefficient (ADC). Additionally, low pretreatment ADC values of various tumors are often predictive of better outcome. These capabilities, once validated, could provide an important opportunity to individualize therapy, thereby minimizing the unnecessary systemic toxicity associated with ineffective therapies, with the additional advantage of improving overall patient health care and associated costs. In this report, we provide a brief technical overview of DW-MRI acquisition protocols and quantitative image analysis approaches, and we review studies which have implemented DW-MRI for early prediction of cancer treatment response.
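The ADC itself follows from the standard mono-exponential signal model S(b) = S(0) exp(-b * ADC), so that ADC = ln(S(0)/S(b)) / b. A minimal sketch of a per-voxel ADC map from two diffusion weightings (variable names are illustrative):

```python
import numpy as np

def adc_map(s0, sb, b, eps=1e-6):
    """Apparent diffusion coefficient from two diffusion weightings.

    s0 : image acquired at b = 0 s/mm^2
    sb : image acquired at diffusion weighting b (s/mm^2)
    Assumes the mono-exponential model S(b) = S(0) * exp(-b * ADC).
    """
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    # clip to eps to avoid log(0) in background voxels
    return np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b
```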
Abstract:
Statistical shape models (SSMs) have been used widely as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs based on the same training data set of scoliotic vertebrae, but on different registration procedures, were compared. The first model was constructed from the original binary masks without any image pre- or post-processing; the second was obtained by applying a feature-preserving smoothing method to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three different criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD and the eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
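For reference, a minimal sketch of two of the standard model criteria named above, under their common definitions (compactness as cumulative explained variance of the leading modes; generalization via leave-one-out reconstruction). The `reconstruct` routine is an assumed placeholder, not code from the study:

```python
import numpy as np

def compactness(eigenvalues, m):
    """Fraction of total shape variance captured by the first m modes."""
    ev = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    return ev[:m].sum() / ev.sum()

def generalization_error(shapes, reconstruct, m):
    """Leave-one-out generalization ability: mean error when each shape
    is approximated by an m-mode model built without it.

    reconstruct(train, shape, m) is an assumed user-supplied routine that
    fits an SSM to `train` and returns the best m-mode fit to `shape`.
    """
    errors = []
    for i in range(len(shapes)):
        train = [s for j, s in enumerate(shapes) if j != i]
        approx = reconstruct(train, shapes[i], m)
        errors.append(np.linalg.norm(shapes[i] - approx) / len(shapes[i]))
    return float(np.mean(errors))
```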
Abstract:
Citation metrics are commonly used as a proxy for scientific merit and relevance. Papers published in English, however, may be cited more frequently than research articles published in other languages; this issue has not yet been investigated from a Swiss perspective, where English is not the native language.
Abstract:
Changes in marine net primary productivity (PP) and in the export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models. These models include representations of marine ecosystems and of the carbon cycle that differ in structure and complexity. All four models show a decrease in global mean PP and EP of between 2 and 20% by 2100 relative to preindustrial conditions, for the SRES A2 emission scenario. Two different regimes for productivity changes are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone, related to enhanced stratification, reduced mixed layer depth, and slowed circulation, causes a decrease in macro-nutrient concentrations and hence in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP, as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one model projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show improved skill in representing observation-based estimates compared to a simple multi-model average. Model results are compared to recent productivity projections obtained with three different algorithms usually applied to infer net primary production from satellite observations.
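A minimal sketch of one plausible regional skill metric, in the spirit of the Taylor (2001) skill score that combines pattern correlation with the ratio of standard deviations; this is a generic illustration, not necessarily the metric proposed in the study:

```python
import numpy as np

def taylor_skill(model, obs):
    """Pattern skill score in the style of Taylor (2001), assuming the
    maximum attainable correlation is 1; returns 1 for a perfect match."""
    model = np.asarray(model, dtype=float).ravel()
    obs = np.asarray(obs, dtype=float).ravel()
    r = np.corrcoef(model, obs)[0, 1]     # pattern correlation
    s = model.std() / obs.std()           # normalized standard deviation
    return 2.0 * (1.0 + r) / (s + 1.0 / s) ** 2
```

Weighting each model's contribution to the multi-model mean by such a regional score is one simple way to realize the skill-weighted averaging described above.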
Abstract:
Altered pressure in the developing left ventricle (LV) results in altered morphology and tissue material properties. Mechanical stress and strain may play a role in the regulating process. This study showed that confocal microscopy, three-dimensional reconstruction, and finite element analysis can provide a detailed model of stress and strain in the trabeculated embryonic heart. The method was used to test the hypothesis that end-diastolic strains are normalized after altered loading of the LV during the stages of trabecular compaction and chamber formation. Stage-29 chick LVs subjected to pressure overload and underload at stage 21 were reconstructed with full trabecular morphology from confocal images and analyzed with finite element techniques. Measured material properties and intraventricular pressures were specified in the models. The results show volume-weighted end-diastolic von Mises stress and strain averaging 50–82% higher in the trabecular tissue than in the compact wall. The volume-weighted average stresses for the entire LV were 115, 64, and 147 Pa in the control, underloaded, and overloaded models, while the corresponding strains were 11, 7, and 4%; thus, neither was normalized in a volume-weighted sense. Localized epicardial strains at the mid-longitudinal level were similar among the three groups and to strains measured from high-resolution ultrasound images. Sensitivity analysis showed that changes in material properties are more significant than changes in geometry in the overloaded strain adaptation, although the resulting stress was similar in both types of adaptation. These results emphasize the importance of appropriate metrics and the role of trabecular tissue in evaluating the evolution of stress and strain in relation to pressure-induced adaptation.
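For reference, the volume-weighted averaging named above reduces to weighting each element's von Mises stress by its volume; a minimal sketch under that standard definition (the array layout is an assumption):

```python
import numpy as np

def von_mises(sigma):
    """Von Mises stress from a 3x3 Cauchy stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.tensordot(s, s))       # sqrt(3/2 * s:s)

def volume_weighted_average(stress_tensors, volumes):
    """Volume-weighted mean of per-element von Mises stresses."""
    vm = np.array([von_mises(s) for s in stress_tensors])
    v = np.asarray(volumes, dtype=float)
    return float((vm * v).sum() / v.sum())
```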
Abstract:
As the number of solutions to the Einstein equations with realistic matter sources that admit closed time-like curves (CTCs) has grown drastically, some authors [10] have called for a physical interpretation of these seemingly exotic curves that could possibly allow for causality violations. A first step in drafting a physical interpretation is to understand how CTCs are created, because recent work [16] has suggested that, to follow a CTC, observers must counter-rotate with the rotating matter, contrary to the currently accepted explanation that CTCs are created by inertial frame dragging. The exact link between inertial frame dragging and CTCs is investigated by simulating particle geodesics and the precession of gyroscopes along CTCs and along backward-in-time-oriented circular orbits in the van Stockum metric, which is known to have CTCs that could be traversable, so that the van Stockum cylinder could be exploited as a time machine. This study of gyroscope precession in the van Stockum metric supports the theory that CTCs are produced by inertial frame dragging due to rotating spacetime metrics.
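For reference, the interior van Stockum dust solution can be written (in units G = c = 1, with rotation parameter a) as

```latex
ds^2 = -\left(dt + a\,r^2\,d\phi\right)^2 + r^2\,d\phi^2
       + e^{-a^2 r^2}\left(dr^2 + dz^2\right)
```

Expanding the first term gives the azimuthal metric component g_{\phi\phi} = r^2 (1 - a^2 r^2), so circles of constant t, r, z become closed time-like curves wherever g_{\phi\phi} < 0, i.e., for r > 1/a.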
Abstract:
The study of animal sociality investigates the immediate and long-term consequences that a social structure has for its group members. Typically, social behavior is observed from interactions between two individuals at the dyadic level. However, a new framework for studying social behavior has emerged that allows the researcher to assess social complexity at multiple scales. Social Network Analysis has recently been applied in the field of ethology, and this novel tool makes it possible to study social behavior in the context of the global network rather than being limited to dyadic interactions. This technique was applied to a group of captive hamadryas baboons (Papio hamadryas hamadryas) in order to assess how the overall network topology of the social group changes over time with the decline of an aging leader male. Observations of aggressive, grooming, and spatial-proximity interactions were collected in three separate years in order to serve as 'snapshots' of the current state of the group. Data on social behavior were collected when the male was in prime health, when the male was at an old age, and after the male's death. A set of metrics was obtained from each time period for each type of social behavior and quantified the change in interaction patterns. The results suggest that baboon social behavior varies across contexts and changes with the attributes of the group's individual members. Possible mechanisms for adapting to a changing social environment are also explored.
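As a generic illustration of the kind of per-period summary such an analysis produces, the following Python sketch (using networkx; the metric selection is an assumption for illustration, not the study's exact set) computes a few standard network measures from weighted interaction counts:

```python
import networkx as nx

def snapshot_metrics(weighted_edges):
    """Summary metrics for one time period's interaction network.

    weighted_edges : iterable of (animal_a, animal_b, weight) tuples,
    e.g., counts of grooming bouts between two individuals.
    """
    G = nx.Graph()
    G.add_weighted_edges_from(weighted_edges)
    return {
        "density": nx.density(G),
        "mean_clustering": nx.average_clustering(G, weight="weight"),
        "degree_centrality": nx.degree_centrality(G),
        # betweenness treats weights as distances, so it is left unweighted here
        "betweenness": nx.betweenness_centrality(G),
    }
```

Comparing these dictionaries across the three time periods (prime health, old age, after death) would quantify the topological change described above.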