922 results for Elements, Electrophysiology, Real-Time Acquisition, Real-Time Analysis, High Throughput Data
Abstract:
The thesis deals with the problem of Model Selection (MS) motivated by information and prediction theory, focusing on parametric time series (TS) models. The main contribution of the thesis is the extension to the multivariate case of the Misspecification-Resistant Information Criterion (MRIC), a recently introduced criterion that solves Akaike's original research problem posed 50 years ago, which led to the definition of the AIC. The importance of MS is attested by the vast literature devoted to it in scientific journals of many different disciplines. Despite such widespread treatment, the contributions that adopt a mathematically rigorous approach are not numerous, and one aim of this project is to review and assess them. Chapter 2 discusses methodological aspects of MS from the standpoint of information theory. Information criteria (IC) for the i.i.d. setting are surveyed along with their asymptotic properties, together with the cases of small samples, misspecification, and alternative estimators. Chapter 3 surveys criteria for TS. IC and prediction criteria are considered for univariate models (AR, ARMA) in the time and frequency domains, parametric multivariate models (VARMA, VAR), nonparametric nonlinear models (NAR), and high-dimensional models. The MRIC answers Akaike's original question on efficient criteria for possibly-misspecified (PM) univariate TS models in multi-step prediction with high-dimensional data and nonlinear models. Chapter 4 extends the MRIC to PM multivariate TS models for multi-step prediction, introducing the Vectorial MRIC (VMRIC). We show that the VMRIC is asymptotically efficient by proving the decomposition of the MSPE matrix and the consistency of its Method-of-Moments Estimator (MoME) for Least Squares multi-step prediction with a univariate regressor. Chapter 5 extends the VMRIC to the general multiple-regressor case by showing that the MSPE matrix decomposition holds, obtaining consistency for its MoME, and proving its efficiency.
The chapter concludes with a digression on the conditions for PM VARX models.
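Although the VMRIC itself is beyond the scope of this summary, the family of Akaike-type information criteria it extends can be illustrated with a minimal sketch: given the residual sum of squares of competing AR(p) least-squares fits, the order minimizing the criterion is chosen. The function names and the toy RSS values below are illustrative assumptions, not the thesis's method.

```python
import math

def aic(n, rss, k):
    # Gaussian-likelihood form of Akaike's criterion: AIC = n*log(RSS/n) + 2k,
    # where k is the number of estimated parameters.
    return n * math.log(rss / n) + 2 * k

def select_order(n, rss_by_order):
    # rss_by_order maps a candidate AR order p to the RSS of its fit
    # (illustrative interface; k = p + 1 counts the innovation variance).
    scores = {p: aic(n, rss, p + 1) for p, rss in rss_by_order.items()}
    best = min(scores, key=scores.get)
    return best, scores
```

For example, with n = 100 and hypothetical RSS values {1: 50.0, 2: 30.0, 3: 29.5}, the criterion picks p = 2: the drop in RSS from order 2 to order 3 is too small to justify the extra parameter.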
Abstract:
Astrocytes are the most numerous glial cell type in the mammalian brain and permeate the entire CNS, interacting with neurons, vasculature, and other glial cells. Astrocytes display intracellular calcium signals that encode information about local synaptic function, distributed network activity, and high-level cognitive functions. Several studies have investigated the calcium dynamics of astrocytes in sensory areas and have shown that these cells can encode sensory stimuli. Nevertheless, only recently has the neuroscientific community focused its attention on the role and functions of astrocytes in associative areas such as the hippocampus. In our first study, we used the information theory formalism to show that astrocytes in the CA1 area of the hippocampus, recorded with 2-photon fluorescence microscopy during spatial navigation, encode spatial information that is complementary and synergistic to information encoded by nearby "place cell" neurons. In our second study, we investigated various computational aspects of applying the information theory formalism to astrocytic calcium data. To this end, we generated realistic simulations of calcium signals in astrocytes to determine optimal hyperparameters and procedures for information measures, and applied them to real astrocytic calcium imaging data. Calcium signals of astrocytes are characterized by complex spatiotemporal dynamics occurring in subcellular parcels of the astrocytic domain, which makes studying these cells in 2-photon calcium imaging recordings difficult. Moreover, current analytical tools that identify astrocytic subcellular regions are time-consuming and rely extensively on user-defined parameters. Here, we present Rapid Astrocytic calcium Spatio-Temporal Analysis (RASTA), a novel machine learning algorithm for spatiotemporal semantic segmentation of 2-photon calcium imaging recordings of astrocytes which operates without human intervention.
We found that RASTA provided fast and accurate identification of astrocytic cell somata, processes, and cellular domains, extracting calcium signals from identified regions of interest across individual cells and populations of hundreds of astrocytes recorded in awake mice.
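As a minimal, hedged illustration of the kind of signal extraction such pipelines perform (this is not RASTA's algorithm), a ΔF/F0 transient detector over a single fluorescence trace can be sketched; the threshold value and function names are assumptions.

```python
def dff(trace, f0=None):
    # ΔF/F0 relative to a baseline fluorescence; defaults to the median,
    # assumed positive. `trace` is a list of raw fluorescence samples.
    if f0 is None:
        f0 = sorted(trace)[len(trace) // 2]
    return [(f - f0) / f0 for f in trace]

def detect_onsets(trace, thresh=0.5):
    # indices where ΔF/F0 first crosses the threshold from below
    d = dff(trace)
    return [i for i in range(1, len(d)) if d[i] >= thresh > d[i - 1]]
```

Real pipelines additionally smooth the trace and first segment the field of view into subcellular regions of interest before extracting such signals.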
Abstract:
This thesis deals with the analysis and management of emergency healthcare processes through advanced analytics and optimization approaches. Emergency processes are among the most complex within healthcare, owing to their non-elective nature and their high variability. The thesis is divided into two topics. The first concerns the core of emergency healthcare processes, the emergency department (ED). In the second chapter, we describe the ED that serves as the case study: a real case with data derived from a large ED located in northern Italy. In the next two chapters, we introduce two tools for supporting ED activities. The first is a new type of analytics model whose aim is to overcome traditional methods of analyzing ED activities, by means of an algorithm that analyzes the ED pathway (organized as an event log) as a whole. The second tool is a decision-support system that integrates a deep neural network for the prediction of patient pathways and an online simulator to evaluate the evolution of the ED over time. Its purpose is to provide a set of solutions to prevent and resolve ED overcrowding. The second part of the thesis focuses on the COVID-19 pandemic emergency. In the fifth chapter, we describe a tool that was used by the Bologna local health authority in the first part of the pandemic. Its purpose is to analyze the clinical pathway of a patient and automatically assign them a state; physicians used this state to route patients to the correct clinical pathways. The last chapter is dedicated to the description of a MIP model, which was used for the organization of the COVID-19 vaccination campaign in the city of Bologna, Italy.
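The online-simulator component can be caricatured, under strong simplifying assumptions, as a discrete-event model in which arriving patients compete for a fixed number of treatment bays; this sketch is not the thesis's simulator, and all names and parameters are illustrative.

```python
import heapq

def simulate_ed(arrival_times, service_times, n_bays):
    # arrival_times: sorted arrival times; service_times: matching durations.
    # Returns each patient's waiting time before entering a bay.
    free_at = [0.0] * n_bays          # time at which each bay becomes free
    heapq.heapify(free_at)
    waits = []
    for t, s in zip(arrival_times, service_times):
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)      # queue if every bay is busy
        waits.append(start - t)
        heapq.heappush(free_at, start + s)
    return waits
```

Overcrowding shows up as growing waits: three patients needing 2-hour treatments produce waits [0, 2, 3] with one bay but [0, 0, 1] with two.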
Abstract:
The Structural Health Monitoring (SHM) research area is increasingly investigated due to its high potential for reducing maintenance costs and ensuring system safety in several industrial application fields. A growing demand for new SHM systems, permanently embedded into the structures for savings in weight and cabling, comes from the aeronautical and aerospace fields. As a consequence, the embedded electronic devices must be wirelessly connected and battery powered, so low power consumption is required. At the same time, high performance in defect or impact detection and localization must be ensured to assess structural integrity. To achieve these goals, the design paradigms can be changed together with the associated signal processing. The present thesis proposes design strategies and unconventional solutions, suitable both for real-time monitoring and periodic inspections, relying on piezo-transducers and Ultrasonic Guided Waves. In the first context, arrays of closely located sensors were designed according to appropriate optimality criteria, exploiting sensor re-shaping and optimal positioning to achieve improved damage/impact localization performance in noisy environments. An additional sensor re-shaping procedure was developed to tackle another well-known issue that arises in realistic scenarios, namely reverberation. A novel sensor, able to filter undesired reflections from mechanical boundaries, was validated via simulations based on the Green's function formalism and FEM. In the active SHM context, a novel design methodology was used to develop a single transducer, called the Spectrum-Scanning Acoustic Transducer, to actively inspect a structure. It can estimate the number of defects and their distances with an accuracy of 2 cm, and the damage angular coordinate with an equivalent mainlobe aperture of 8 deg, provided a 24 cm radial gap between two defects. Suitable signal processing was developed to limit the computational cost, allowing its use with embedded electronic devices.
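The pulse-echo relation underlying distance estimation with guided waves is a textbook one, not the transducer's actual processing: each echo delay maps to a defect distance through the group velocity of the wave. A minimal sketch, with illustrative values:

```python
def defect_distances(echo_delays, group_velocity):
    # Pulse-echo: the wave travels to the scatterer and back, so
    # distance = v_g * delay / 2. Delays in seconds, velocity in m/s.
    return [group_velocity * t / 2.0 for t in sorted(echo_delays)]
```

With an assumed group velocity of 5000 m/s, a 40 µs echo delay corresponds to a defect 10 cm away.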
Abstract:
In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms; on the other hand, to study the behavior of people, their opinions and habits. Presented here are three lines of research in which an attempt was made to solve real epidemiological problems through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired Deep Learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally (Chapter 3) we reconstructed the network of retweets among COVID-19 themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users’ attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlighted the importance of the work done in light of the main contemporary challenges for epidemiological surveillance. 
In particular, we present reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
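The compartmental modeling used in Chapter 2 can be illustrated with a generic SIR sketch (the hospital-ward model in the thesis is more specific, and the parameter values below are illustrative):

```python
def sir(beta, gamma, s0, i0, r0, dt, steps):
    # Forward-Euler integration of the classic SIR system:
    # dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
    s, i, r = s0, i0, r0
    trajectory = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory
```

Sensitivity analysis, as in the thesis, amounts to re-running such a model while perturbing beta and gamma (for example, to mimic control measures) and observing the epidemic curve.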
Abstract:
In recent years, energy modernization has focused on smart engineering advancements, which entails designing complex software and hardware for variable-voltage digital substations. A digital substation consists of electrical and auxiliary devices, control and monitoring devices, computers, and control software. Intelligent measurement systems in digital substations use digital instrument transformers and IEC 61850-compliant information exchange protocols. Digital instrument transformers used for real-time high-voltage measurements should combine advanced digital, measuring, information, and communication technologies, and should be inexpensive, small, light, and fire- and explosion-safe. These smaller and lighter transformers allow long-distance transmission of an optical signal that gauges direct or alternating current, although the cost of optical converters remains prohibitive. To improve measurement accuracy, amorphous alloys are used in the magnetic circuits together with compensating feedback. Large-scale voltage converters can be made cheaper by using resistive, capacitive, or hybrid voltage dividers. In known electronic voltage transformers, the voltage divider output is generally on the low-voltage side, facilitating power supply organization. Combining current and voltage transformers reduces equipment size, installation, and maintenance costs: the two devices cost less together than individually. To increase commercial power metering accuracy, current and voltage converters should be integrated into digital instrument transformers so that simultaneous analogue-to-digital samples are obtained; multichannel ADC microcircuits with a synchronous conversion start naturally enable such parallel sampling. Digital instrument transformers are designed to adapt to substation operating conditions and environmental variables, especially ambient temperature. An embedded microprocessor auto-diagnoses and auto-calibrates the proposed digital instrument transformer.
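The reason simultaneous sampling matters for metering can be sketched in a few lines: active power is the mean of instantaneous v·i products, which is only meaningful if the voltage and current samples were taken at the same instants, as synchronous multichannel ADCs guarantee. A minimal sketch with illustrative sample values:

```python
def active_power(voltage_samples, current_samples):
    # Mean instantaneous power over an integer number of periods.
    # Assumes both channels were sampled at the same instants.
    if len(voltage_samples) != len(current_samples):
        raise ValueError("channels must be sampled synchronously")
    n = len(voltage_samples)
    return sum(v * i for v, i in zip(voltage_samples, current_samples)) / n
```

Skewed (non-simultaneous) sampling of v and i would bias exactly this product term, which is why the metering error grows with inter-channel delay.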
Abstract:
Context. Our understanding of the chemical evolution (CE) of the Galactic bulge requires the determination of abundances in large samples of giant stars and planetary nebulae (PNe). Studies based on high-resolution spectroscopy of giant stars in several fields of the Galactic bulge, obtained with very large telescopes, have allowed important progress. Aims. We discuss PNe abundances in the Galactic bulge and compare these results with those presented in the literature for giant stars. Methods. We present the largest high-quality dataset available for PNe in the direction of the Galactic bulge (inner-disk/bulge). For comparison purposes, we also consider a sample of PNe in the Large Magellanic Cloud (LMC). We derive the element abundances in a consistent way for all the PNe studied. By comparing the abundances for the bulge, inner disk, and LMC, we identify elements that have not been modified during the evolution of the PN progenitor and can be used to trace the bulge chemical enrichment history. We then compare the PN abundances with abundances of bulge field giants. Results. At the metallicity of the bulge, we find that the abundances of O and Ne are close to the values for the interstellar medium at the time of the PN progenitor formation; hence these elements can be used as tracers of the bulge CE, in the same way as S and Ar, which are not expected to be affected by nucleosynthetic processes during the evolution of the PN progenitors. The PN oxygen abundance distribution is shifted to lower values by 0.3 dex with respect to the distribution given by giants. A similar shift appears to occur for Ne and S. We discuss possible reasons for this PNe-giant discrepancy and conclude that it is probably due to systematic errors in the abundance derivations in either giants or PNe (or both). We issue an important warning concerning the use of absolute abundances in CE studies.
Abstract:
The Brazilian National Policy for Comprehensive Men's Health Care (Política Nacional de Atenção Integral à Saúde do Homem) proposes differentiated forms of action by the health team in caring for the male population, since this public demands differentiated service strategies. When confronted with the reality experienced by men, several questions arise about the socially constructed stereotype of masculine characteristics and men's lived experiences. The aim of the research was to understand some aspects relevant to the health practices of men who use a Family Health Unit, such as quality of life, alcohol consumption, social representations of alcoholic beverages, and masculinity characteristics. A sample of 300 men attending a Family Health Unit was used, and a questionnaire was applied containing sociodemographic data, the World Health Organization Quality of Life instrument (Whoqol-bref), the Bem Sex-Role Inventory (BSRI), a free-evocation exercise about alcoholic beverages, the Alcohol Use Disorders Identification Test (Audit), and a block to verify the problems caused by alcohol consumption and the search for treatment. Data from the quantitative instruments were analyzed with statistical tests for comparison of means and for correlation. The evocation data were analyzed with the EVOC software (Ensemble de Programmes Permettant l'Analyse des Évocations). The first analysis showed higher adherence to feminine characteristics, high perceived quality of life, and patterns of alcohol consumption similar to national averages. Men who declared that they practice their religion showed a significantly lower mean alcohol consumption. Feminine gender characteristics, as well as the physical, social, and psychological domains and the global perception of quality of life, correlated inversely with alcohol consumption. In the analysis of evocations, the elements tending toward centrality were found to be mostly negative in character.
Data from the general population presented the term "taste" as a positive and central aspect of alcoholic beverages. The abstainers group did not evaluate the term positively, while the drinkers group presented the term "fun" in the first periphery, referring to the positive and socializing aspects of drinking. The results indicated a satisfactory quality of life, and religion and feminine characteristics stood out as protective factors against alcohol use. Participants also showed awareness of the problems associated with their own consumption. Although most terms related to alcoholic beverages were negative, consumption still occurs at a considerable level. It is therefore necessary to build a bond between the professional and the health service user, so that real practices surrounding alcohol can come to light. These data can help Family Health professionals reflect on the social representations they construct about working-class men who use the service.
Abstract:
Dissertation submitted to obtain the degree of Doctor in Informatics.
Abstract:
Doctoral thesis in Educational Sciences (specialization in Curriculum Development).
Abstract:
With the availability of new-generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires costly and time-consuming gap closure, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the pursued information. For emerging pathogens, i.e. newly identified pathogens, the lack of release of genome data during the gap-closure stage is clearly medically counterproductive. We thus investigated the feasibility of a "dirty genome" approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae was retrieved even with relatively short reads from the Genome Sequencer 20 and Solexa platforms. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to elaborate the first steps of an ELISA. This work constitutes a proof of principle for the dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests such as ELISA, immunohistochemistry, and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty-genome-sequencing/proteomic approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.
Abstract:
Lanthanides are the chemical elements from lanthanum to lutetium. They intrinsically exhibit some very exciting photophysical properties, which can be further enhanced by incorporating the lanthanide ion into organic or inorganic sensitizing structures. A very popular approach is to conjugate the lanthanide ion to an organic chromophore, forming lanthanide chelates. Another approach, which has quickly gained interest, is to incorporate the lanthanide ions into nanoparticle structures, thus attaining improved specific activity and binding capacity. Lanthanide-based reporters usually express strong luminescence emission, multiple narrow emission lines covering a wide wavelength range, and exceptionally long excited-state lifetimes enabling time-resolved detection. Because of these properties, lanthanide-based reporters have found widespread applications in many fields; this study focuses on bioanalytical applications. The aim of the study was to demonstrate the utility of different lanthanide-based reporters in homogeneous Förster resonance energy transfer (FRET)-based bioaffinity assays. Several different model assays were constructed. One was a competitive bioaffinity assay that utilized energy transfer from lanthanide chelate donors to fluorescent protein acceptors. In addition to the conventional FRET phenomenon, a recently discovered non-overlapping FRET (nFRET) phenomenon was demonstrated for the first time for fluorescent proteins. The lack of spectral overlap in the nFRET mechanism provides sensitivity and versatility to energy transfer-based assays. The distance and temperature dependence of these phenomena were further studied in a DNA-hybridization assay. The distance dependence of nFRET deviated from that of FRET, and unlike FRET, nFRET demonstrated clear temperature dependence. Based on these results, a possible excitation mechanism operating in nFRET was proposed.
In the study, two enzyme activity assays for caspase-3 were also constructed. One was a fluorescence quenching-based enzyme activity assay that utilized novel inorganic particulate reporters called upconverting phosphors (UCPs) as donors. The use of UCPs enabled the construction of a simple, rather inexpensive, and easily automated assay format with a high throughput rate. The other enzyme activity assay took advantage of another novel reporter class, the lanthanide-binding peptides (LBPs). In this assay, energy was transferred from an LBP to a green fluorescent protein (GFP). Using the LBPs, it was possible to avoid the rather laborious, often poorly repeatable, and randomly positioned chemical labeling. In most of the constructed assays, time-resolved detection was used to eliminate the interfering background signal caused by autofluorescence. The improved signal-to-background ratios resulted in increased assay sensitivity, often unobtainable in homogeneous assay formats using conventional organic fluorophores. The anti-Stokes luminescence of the UCPs, however, enabled the elimination of autofluorescence even without time-gating, thus simplifying the instrument setup. Together, the studied reporters and assay formats pave the way for increasingly sensitive, simple, and easily automated bioanalytical applications.
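The distance dependence of conventional FRET studied above follows the Förster relation E = 1/(1 + (r/R0)^6); the nFRET mechanism, as the text notes, deviates from this law. A minimal sketch of the conventional relation:

```python
def fret_efficiency(r, r0):
    # Förster resonance energy transfer efficiency for donor-acceptor
    # separation r and Förster radius r0 (same length units).
    return 1.0 / (1.0 + (r / r0) ** 6)
```

At r = R0 the efficiency is exactly 50%, and it falls off steeply on either side: halving or doubling the distance drives E close to 1 or 0, which is what makes FRET a sensitive molecular ruler.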
Abstract:
The attached file was created with Scientific WorkPlace (LaTeX).
Abstract:
Dinoflagellates are unicellular eukaryotes found in both freshwater and marine environments. They are particularly known for causing toxic algal blooms called "red tides", for their symbiosis with corals, and for their major contribution to carbon fixation in the oceans. At the molecular level, they are also known for their unique nuclear characteristics: their chromosomes generally contain an immense quantity of DNA, packaged and condensed in a liquid-crystalline form instead of nucleosomes. Nuclear-encoded genes are often present in multiple copies arranged in tandem, and no transcriptional regulatory element, including the TATA box, has yet been observed. The unique chromatin organization of dinoflagellates suggests that different strategies are needed to control gene expression in these organisms. In this study, I addressed this problem using the photosynthetic dinoflagellate Lingulodinium polyedrum as a model. L. polyedrum is of particular interest because it has several circadian (daily) rhythms. To date, all studies of gene expression over circadian changes have demonstrated regulation at the translational level. In my research, I used transcriptomic, proteomic, and phosphoproteomic approaches, together with biochemical studies, to provide insight into the mechanics of gene regulation in dinoflagellates, with emphasis on the importance of phosphorylation in the circadian system of L. polyedrum. The absence of histone proteins and nucleosomes is a distinctive feature of dinoflagellates. Using RNA-Seq technology, I found complete sequences encoding histones and histone-modifying enzymes.
L. polyedrum thus expresses conserved histone-coding sequences, but the level of protein expression is below the detection limits of Western immunoblotting. The RNA-Seq data were also used to generate a transcriptome, a list of the genes expressed by L. polyedrum. A sequence-homology search was first performed to classify the transcripts into Gene Ontology (GO) categories. This analysis revealed a low abundance of transcription factors and, among them, a surprising predominance of Cold Shock domain sequences. In L. polyedrum, many genes are repeated in tandem. RNA-Seq reads were aligned against the genomic copies of tandemly arranged genes to examine the presence of polycistronic transcripts, a hypothesis proposed to explain the lack of promoter elements in the intergenic regions of these genes. This analysis also showed very high conservation of the coding sequences of tandemly arranged genes. The transcriptome was also used to assist protein identification after mass-spectrometry sequencing, and a phosphoprotein-enriched fraction proved particularly well suited to high-throughput analysis. Comparison of phosphoproteomes from two different times of day revealed that a large fraction of the proteins whose phosphorylation state varies with time are related to RNA binding and translation. The transcriptome was further used to define the spectrum of kinases present in L. polyedrum, which was then used to classify the different phosphorylated peptides that are potential targets of these kinases.
Several peptides identified as phosphorylated by Casein Kinase 2 (CK2), a kinase known to be involved in the eukaryotic circadian clock, derive from various RNA-binding proteins. To assess the possibility that some of the many Cold Shock domain proteins identified in the transcriptome might modulate gene expression in L. polyedrum, as observed in several other prokaryotic and eukaryotic systems, the cells' response to cold temperatures was examined. Cold temperatures rapidly induced encystment, a condition in which the cells become metabolically inactive to withstand unfavorable environmental conditions. Changes in the phosphoprotein profile appear to be the major factor driving cyst formation. Phosphosites predicted to be phosphorylated by CK2 are the most strongly reduced class in cysts, an interesting finding because the bioluminescence rhythm confirms that the clock is arrested in the cyst.
Abstract:
Routine activity theory, introduced by Cohen & Felson in 1979, states that criminal acts arise from the convergence in time and place of criminals and victims in the absence of guardians. As the number of such convergences increases, criminal acts will also increase, even if the number of criminals or civilians within the vicinity of a city remains the same. Street robbery is a typical example of routine activity theory, and its occurrence can be predicted using the theory. Agent-based models allow the simulation of diversity among individuals; agent-based simulation of street robbery can therefore be used to visualize how the chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people: criminals, civilians, and police, with certain activity areas for each. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status, and ability for guardianship. These agents perform a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
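The Q-learning guidance of agent movement can be sketched with the standard tabular update rule (a generic form; the thesis's module, states, and rewards are not specified here, so all names below are illustrative):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One tabular Q-learning step:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    # q is a dict of dicts: q[state][action] -> estimated value.
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q.setdefault(state, {}).setdefault(action, 0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q
```

In a street-robbery simulation, the state could encode an agent's location and nearby guardianship, the action a movement choice, and the reward the outcome of an encounter.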