933 results for non-trivial data structures
Abstract:
The paper presents a new, more effective network-flow interpretation of Łukasiewicz’s logic. The results show that the presented network-flow models can, in principle, handle multivalued logics with more than three variable states, i.e. with any finite set of states in the interval from 0 to 1. The described models make it possible to formulate various logical functions. If the arc flow values produced by one model are used as input data for other models, then more sophisticated logical structures can also be interpreted successfully in Łukasiewicz’s logic. The models allow Łukasiewicz’s logic to be studied with the efficient methods of network-flow programming; in particular, the specific properties and results pertaining to the ‘traffic capacity of the network arcs’ function can be exploited. Based on the introduced network-flow approach, other multivalued logics – those of E. Post, L. Brauer, Kolmogorov, etc. – can be interpreted as well.
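For reference, the Łukasiewicz connectives underlying such multivalued models have simple arithmetic definitions over any finite set of truth values in [0, 1]. A minimal sketch of the standard definitions (independent of the paper's network-flow formulation):

```python
# Standard n-valued Łukasiewicz connectives on evenly spaced truth
# values in [0, 1]. These are the textbook definitions, not the
# network-flow models of the paper.
from fractions import Fraction

def states(n):
    """The n evenly spaced truth values 0, 1/(n-1), ..., 1."""
    return [Fraction(k, n - 1) for k in range(n)]

def luk_and(a, b):      # strong conjunction: max(0, a + b - 1)
    return max(Fraction(0), a + b - 1)

def luk_or(a, b):       # strong disjunction: min(1, a + b)
    return min(Fraction(1), a + b)

def luk_implies(a, b):  # implication: min(1, 1 - a + b)
    return min(Fraction(1), 1 - a + b)

def luk_not(a):         # negation: 1 - a
    return 1 - a

# Example over the 3-valued logic {0, 1/2, 1}:
vals = states(3)
assert luk_implies(Fraction(1, 2), Fraction(0)) == Fraction(1, 2)
```

For more than three states, `states(n)` simply produces a finer grid; the connectives are unchanged, which is exactly the sense in which the logic generalizes to any finite number of values.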
Abstract:
Climate change is one of the most important and urgent issues of our time. In 2006, China overtook the United States as the world’s largest greenhouse gas (GHG) emitter, and China’s role in an international climate change solution has since gained increased attention. Although much literature has addressed the functioning, performance, and implications of existing climate change mitigation policies and actions in China, little of it illuminates how the national climate change mitigation policies have been formulated and shaped. This research uses the policy network approach to explore China’s climate change mitigation policy making, examining how a variety of government, business, and civil society actors have formed networks to address environmental contexts and influence policy outcomes and changes. The study is qualitative in nature. Three cases are selected to illustrate the structural and interactive features of specific policy network settings in shaping different policy arrangements and influencing outcomes in the Chinese context: the regulatory evolution of China’s climate change policy making; the country’s involvement in the Clean Development Mechanism (CDM); and China’s exploration of voluntary agreement through the Top-1000 Industrial Energy Conservation Program. The historical analysis of the policy process uses both primary data from interviews and fieldwork and secondary data from the relevant literature. The study finds that the Chinese central government dominates domestic climate change policy making; however, expanded action networks involving actors at all levels have emerged in response to diverse climate mitigation policy arrangements. The improved openness and accessibility of the climate change policy network have contributed to its proactive engagement in promoting mitigation outcomes.
In conclusion, the research suggests that the policy network approach provides a useful tool for studying China’s climate change policy making process. The involvement of various types of state and non-state actors has shaped new relations and affected policy outcomes and changes. In addition, through cross-case analysis, the study challenges the “fragmented authoritarianism” model, arguing that this once-influential model is no longer adequate for explaining new developments and changes in policy making processes in contemporary China.
Abstract:
Experimental and theoretical studies of noise processes in various kinds of AlGaAs/GaAs heterostructures with a quantum well are reported. The measurement setup, involving a Fast Fourier Transform and analog wave analyzer covering the frequency range from 10 Hz to 1 MHz, a computerized data storage and processing system, and a cryostat covering the temperature range from 78 K to 300 K, is described in detail. The current noise spectra are obtained with the “three-point method”, using Quan-Tech and avalanche noise sources for calibration. The properties of both GaAs and AlGaAs materials and of field effect transistors based on the two-dimensional electron gas in the interface quantum well are discussed. Extensive measurements are performed on three types of heterostructures, viz., Hall structures with a large spacer layer, modulation-doped non-gated FETs, and more standard gated FETs; all structures are grown by MBE techniques. The Hall structures show Lorentzian generation-recombination noise spectra with nearly temperature-independent relaxation times. This noise is attributed to g-r processes in the 2D electron gas. For the TEGFET structures, we observe several Lorentzian g-r noise components with strongly temperature-dependent relaxation times. This noise is attributed to trapping processes in the doped AlGaAs layer. The trap level energies are determined from an Arrhenius plot of log(τT²) versus 1/T as well as from the plateau values. The theory used to interpret these measurements and to extract the defect level data is reviewed and further developed. Good agreement with the data is found for all reported devices.
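The Arrhenius extraction step mentioned above reduces to a linear fit: ln(τT²) plotted against 1/T has slope E_a/k_B. A minimal sketch with synthetic values (illustrative only, not the measured data):

```python
# Hedged sketch: extracting a trap activation energy from an Arrhenius
# plot of ln(tau * T^2) versus 1/T. The trap parameters below are
# synthetic, purely to illustrate the fit.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(temps_K, taus_s):
    """Least-squares slope of ln(tau*T^2) vs 1/T, times k_B, gives E_a (eV)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(tau * T * T) for T, tau in zip(temps_K, taus_s)]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return K_B * slope

# Synthetic trap obeying tau*T^2 = C * exp(E_a / (k_B T)) with E_a = 0.30 eV:
E_true = 0.30
temps = [100.0, 150.0, 200.0, 250.0, 300.0]
taus = [1e-5 * math.exp(E_true / (K_B * T)) / T**2 for T in temps]
```

Because the synthetic data are exactly Arrhenius, the fitted slope recovers E_true; with measured relaxation times the residual scatter around the line indicates how well a single trap level describes the data.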
Abstract:
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight with a diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness. Evidence-based patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long-term. Lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. Success of the CBIs, however, critically relies on ensuring engagement and retention of CBI users so that they remain motivated to use these systems and come back to use them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are among the most important factors of successful health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
Abstract:
In order to reconstruct regional vegetation changes and local conditions during the fen-bog transition in the Borsteler Moor (northwestern Germany), a sediment core covering the period between 7.1 and 4.5 cal kyrs BP was palynologically investigated. The pollen diagram demonstrates the dominance of oak forests and a gradual replacement of trees by raised bog vegetation under the wetter conditions of the Late Atlantic. At ~ 6 cal kyrs BP, the non-pollen palynomorphs (NPP) demonstrate the succession from mesotrophic conditions, clearly indicated by a number of fungal spore types, to oligotrophic conditions, indicated by Sphagnum spores, Bryophytomyces sphagni, and the testate amoebae Amphitrema, Assulina, Arcella, etc. Four relatively dry phases during the transition from fen to bog are clearly indicated by the dominance of Calluna and associated fungi as well as by the increase of microcharcoal. Several new NPP types are described and known NPP types are identified. All NPP are discussed in the context of their palaeoecological indicator values.
Abstract:
Constant technology advances have caused a data explosion in recent years. Accordingly, modern statistical and machine learning methods must be adapted to deal with complex and heterogeneous data types. This is particularly true for analyzing biological data. For example, DNA sequence data can be viewed as categorical variables, with each nucleotide taking one of four categories. Gene expression data, depending on the quantitative technology, can be continuous numbers or counts. With the advancement of high-throughput technology, the abundance of such data has become unprecedentedly rich. Efficient statistical approaches are therefore crucial in this big data era.
Previous statistical methods for big data often aim to find low dimensional structures in the observed data. For example, a factor analysis model assumes a latent Gaussian-distributed multivariate vector; with this assumption, the factor model produces a low-rank estimate of the covariance of the observed variables. Another example is the latent Dirichlet allocation model for documents, which assumes mixture proportions of topics represented by a Dirichlet-distributed variable. This dissertation proposes several novel extensions of these statistical methods, developed to address challenges in big data. The novel methods are applied in multiple real-world applications, including the construction of condition-specific gene co-expression networks, estimating shared topics among newsgroups, analysis of promoter sequences, analysis of political-economic risk data, and estimating population structure from genotype data.
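The low-dimensional structure in the factor-model example above takes a concrete form: the implied covariance is low-rank plus diagonal, Σ = ΛΛᵀ + Ψ. A minimal sketch with illustrative dimensions and random loadings (not from the dissertation):

```python
# Hedged sketch: the low-rank-plus-diagonal covariance implied by a
# factor analysis model, Sigma = Lambda Lambda^T + Psi, for p observed
# variables and k latent factors (k << p). All numbers are illustrative.
import numpy as np

p, k = 5, 2
rng = np.random.default_rng(0)
Lambda = rng.normal(size=(p, k))          # factor loadings
Psi = np.diag(rng.uniform(0.1, 0.5, p))   # idiosyncratic (diagonal) noise

Sigma = Lambda @ Lambda.T + Psi           # implied p x p covariance

# The systematic part has rank at most k, which is the sense in which
# the model finds "low dimensional structure" in p-dimensional data:
rank_low = np.linalg.matrix_rank(Lambda @ Lambda.T)
assert rank_low <= k

# Generative view: x = Lambda z + eps, with z ~ N(0, I_k)
z = rng.normal(size=k)
eps = rng.normal(size=p) * np.sqrt(np.diag(Psi))
x = Lambda @ z + eps
```

Fitting Λ and Ψ from data (rather than drawing them at random, as here) is what yields the low-rank covariance estimate the abstract refers to.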
Abstract:
A certain type of bacterial inclusion, known as a bacterial microcompartment, was recently identified and imaged through cryo-electron tomography. A reconstructed 3D object from single-axis limited angle tilt-series cryo-electron tomography contains missing regions; this is known as the missing wedge problem. Due to the missing regions in the reconstructed images, analyzing their 3D structures is challenging. Existing methods overcome this problem by aligning and averaging several similarly shaped objects. These schemes work well if the objects are symmetric and several objects with almost similar shapes and sizes are available. Since the bacterial inclusions studied here are not symmetric, are deformed, and show a wide range of shapes and sizes, the existing approaches are not appropriate. This research develops new statistical methods for analyzing geometric properties, such as volume, symmetry, aspect ratio, polyhedral structure, etc., of these bacterial inclusions in the presence of missing data. These methods work with deformed, non-symmetric objects of varied shape and do not require multiple objects to handle the missing wedge problem. The developed methods and contributions include: (a) an improved method for manual image segmentation, (b) a new approach to 'complete' the segmented and reconstructed incomplete 3D images, (c) a polyhedral structural distance model to predict the polyhedral shapes of these microstructures, (d) a new shape descriptor for polyhedral shapes, named the polyhedron profile statistic, and (e) Bayes, linear discriminant analysis, and support vector machine classifiers for supervised incomplete polyhedral shape classification. Finally, the predicted 3D shapes for these bacterial microstructures belong to the Johnson solids family, and these shapes, along with other geometric properties, are important for better understanding their chemical and biological characteristics.
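The supervised classification in contribution (e) can be illustrated with a minimal Gaussian naive Bayes sketch. The features (volume, aspect ratio) and class labels below are hypothetical stand-ins, not the thesis's polyhedron profile statistic:

```python
# Hedged sketch: a Gaussian naive Bayes classifier of the kind listed
# in contribution (e), applied to hypothetical shape features (volume,
# aspect ratio). Classes "T5"/"J12" are illustrative Johnson-solid-style
# labels, not the thesis's actual data.
import math
from collections import defaultdict

def fit(X, y):
    """Per-class feature means, variances, and class priors."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    model = {}
    for c, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, vars_, n / len(X))
    return model

def predict(model, x):
    """Class with the highest log posterior under independent Gaussians."""
    def log_post(c):
        means, vars_, prior = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * s) - (v - m) ** 2 / (2 * s)
                 for v, m, s in zip(x, means, vars_))
        return math.log(prior) + ll
    return max(model, key=log_post)

# Two hypothetical shape classes separated by volume and aspect ratio:
X = [[1.0, 1.1], [1.2, 1.0], [3.0, 2.1], [3.2, 2.0]]
y = ["T5", "T5", "J12", "J12"]
m = fit(X, y)
assert predict(m, [1.1, 1.05]) == "T5"
```

In the thesis setting the feature vectors would be the proposed polyhedron profile statistics computed from the completed 3D reconstructions; the classifier machinery itself is unchanged.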
Abstract:
Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional designs of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the effects of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution for the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value.
Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of resistance factor in some cases and in other cases to non-conservative (unsafe) values. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls together with matching bias data reported in the literature.
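The thesis's general closed-form solution is not reproduced in this abstract; as a point of reference, the classical lognormal special case of the linear limit state g = R − Q links the operational factor of safety to the reliability index as follows (textbook formula, illustrative inputs):

```python
# Hedged sketch: the classical closed-form reliability index for a
# linear limit state g = R - Q with lognormal resistance R and load Q,
# expressed via the operational factor of safety OFS = mean(R)/mean(Q).
# This is the standard textbook result, not the thesis's generalized
# solution (which also folds in bias statistics and dependencies).
import math

def beta_lognormal(ofs, cov_r, cov_q):
    """Reliability index for lognormal R, Q with given COVs."""
    num = math.log(ofs * math.sqrt((1 + cov_q**2) / (1 + cov_r**2)))
    den = math.sqrt(math.log((1 + cov_r**2) * (1 + cov_q**2)))
    return num / den

def prob_failure(beta):
    """P_f = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

# Illustrative inputs: OFS = 2.0, COV of resistance 0.3, COV of load 0.2
beta = beta_lognormal(ofs=2.0, cov_r=0.3, cov_q=0.2)
pf = prob_failure(beta)
```

The formula makes the abstract's central point concrete: the same OFS maps to very different probabilities of failure depending on the COVs, which is why conventional and probabilistic margins of safety can disagree.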
Abstract:
The aim of this study is to explore the suitability of chromospheric images for magnetic modeling of active regions. We use high-resolution images (≈0.2"–0.3") from the Interferometric Bidimensional Spectrometer in the Ca II 8542 Å line, the Rapid Oscillations in the Solar Atmosphere instrument in the Hα 6563 Å line, and the Interface Region Imaging Spectrograph in the 2796 Å line, and compare non-potential magnetic field models obtained from those chromospheric images with models obtained from images of the Atmospheric Imaging Assembly in coronal (171 Å, etc.) and chromospheric (304 Å) wavelengths. Curvi-linear structures are automatically traced in those images with the OCCULT-2 code, to which we forward-fitted magnetic field lines computed with the Vertical-current Approximation Nonlinear Force Free Field code. We find that: (1) the chromospheric images reveal crisp curvi-linear structures (fibrils, loop segments, spicules) that are extremely well-suited for constraining magnetic modeling; (2) these curvi-linear structures are field-aligned with the best-fit solution to within a median misalignment angle of μ2 ≈ 4°–7°; (3) the free energy computed from coronal data may underestimate that obtained from chromospheric data by a factor of ≈2–4; (4) the height range of chromospheric features is confined to h ≲ 4000 km, while coronal features are detected up to h = 35,000 km; and (5) the plasma-β parameter is β ≈ 10^-5–10^-1 for all traced features. We conclude that chromospheric images reveal important magnetic structures that are complementary to coronal images and need to be included in comprehensive magnetic field models, something that is currently not accommodated in standard NLFFF codes.
Abstract:
Presented in this report is an investigation of the use of "sand-lightweight" concrete in prestressed concrete structures. The sand-lightweight concrete consists of 100% sand substitution for fines, along with Idealite coarse and medium lightweight aggregate and Type I Portland Cement.
Abstract:
The majority of situations in which a child learns require the activity of reading. Beyond the words, beyond the simple physical perception of language, Smith (1977) describes the deep structure of language as the meaning the reader gives to a text, in addition to the literal sense of the text itself. For the reader, the meaning of a text comprises the information extracted from it, together with certain information inferred from it. In other words, meaning corresponds to the answers the reader obtains to his or her cognitive questions. Consequently, the deep structure of language really amounts to the process by which discourse is produced and understood. It lies both in the individual's head and in the text; it is part of the reader's theory of the world. […] The present research falls within the constructivist model of reading comprehension. Its specific objective is to provide indications about the development of comprehension skills in second- and fourth-grade primary school pupils, a question few studies have addressed. A 2×2×2×2 factorial research design is used (2 types of text structure × 2 levels of text complexity × 2 grade levels × 2 orders of text presentation).
Abstract:
The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
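The quantification idea behind such sigmoidal models can be illustrated with the basic inversion step for a four-parameter logistic response curve; the parameter values below are illustrative, and the thesis's multi-step procedure additionally estimates these parameters from the dilution-series data:

```python
# Hedged sketch: reading a relative (log) concentration off an observed
# intensity by inverting a four-parameter logistic curve. Parameter
# values are illustrative, not fitted to any real lysate array data.
import math

def logistic4(x, a, d, b, c):
    """Intensity at log-concentration x: lower asymptote a, upper d,
    slope b, midpoint c."""
    return a + (d - a) / (1.0 + math.exp(-b * (x - c)))

def invert_logistic4(y, a, d, b, c):
    """Solve logistic4(x) = y for x (requires a < y < d)."""
    return c - math.log((d - a) / (y - a) - 1.0) / b

a, d, b, c = 100.0, 5000.0, 1.3, 2.0   # illustrative curve parameters
x_true = 2.7
y_obs = logistic4(x_true, a, d, b, c)  # simulated observed intensity
x_est = invert_logistic4(y_obs, a, d, b, c)
```

In practice the curve parameters are unknown and shared across samples, which is what makes joint estimation (and its asymptotics) the hard statistical problem the thesis addresses.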
Abstract:
This thesis describes a collection of studies into the electrical response of a III-V MOS stack comprising metal/GaGdO/GaAs layers as a function of fabrication process variables, and the findings of those studies. As a result of this work, areas of improvement in the gate process module of a III-V heterostructure MOSFET were identified. Compared to a traditional bulk silicon MOSFET design, one featuring a III-V channel heterostructure with a high-dielectric-constant oxide as the gate insulator provides numerous benefits: the insulator can be made thicker for the same capacitance, the operating voltage can be made lower for the same current output, and improved output characteristics can be achieved without reducing the channel length further. It is known that transistors composed of III-V materials are particularly susceptible to damage induced by radiation and plasma processing. These devices utilise sub-10 nm gate dielectric films, which are prone to contamination, degradation and damage. Therefore, throughout the course of this work, process damage and contamination issues, as well as various techniques to mitigate or prevent them, have been investigated through comparative studies of III-V MOS capacitors and transistors comprising various forms of metal gates, various thicknesses of GaGdO dielectric, and a number of GaAs-based semiconductor layer structures. Transistors fabricated before this work commenced showed problems with threshold voltage control. Specifically, MOSFETs designed for normally-off (VTH > 0) operation exhibited below-zero threshold voltages. With the results obtained during this work, it was possible to gain an understanding of why the transistor threshold voltage shifts as the gate length decreases and of what pulls the threshold voltage downwards, preventing normally-off device operation. Two main culprits for the negative VTH shift were found.
The first was radiation damage induced by the gate metal deposition process, which can be prevented by slowing down the deposition rate. The second was the layer of gold added on top of platinum in the gate metal stack, which reduces the effective work function of the whole gate due to its electronegativity properties. Since the device was designed for a platinum-only gate, this could explain the below-zero VTH. This could be prevented either by using a platinum-only gate, or by matching the layer structure design to the actual gate metal used in future devices. Post-metallisation thermal anneal was shown to mitigate both these effects. However, if post-metallisation annealing is used, care should be taken to ensure it is performed before the ohmic contacts are formed, as the thermal treatment was shown to degrade the source/drain contacts. In addition, the programme of studies this thesis describes also found that if the gate contact is deposited before the source/drain contacts, it causes a shift in threshold voltage towards negative values as the gate length decreases, because the ohmic contact anneal process affects the properties of the underlying material differently depending on whether it is covered with the gate metal or not. In terms of surface contamination, this work found that it causes device-to-device parameter variation, and a plasma clean is therefore essential. This work also demonstrated that the parasitic capacitances in the system, namely the contact-periphery-dependent gate-ohmic capacitance, play a significant role in the total gate capacitance, to such an extent that reducing the distance between the gate and the source/drain ohmic contacts in the device would help shift the threshold voltages closer to the designed values. The findings made available by the collection of experiments performed for this work have two major applications.
Firstly, these findings provide useful data for the study of the possible phenomena taking place inside the metal/GaGdO/GaAs layers and interfaces as a result of the chemical processes applied to them. In addition, these findings support recommendations on how best to approach fabrication of devices utilising these layers.
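The high-k benefit noted at the start of this abstract (a thicker insulator for the same capacitance) follows directly from C/A = ε₀k/t. A minimal sketch; the relative permittivity assumed for GaGdO below is illustrative, as the abstract does not quote one:

```python
# Hedged sketch: equivalent oxide thickness (EOT) comparison showing why
# a high-k gate insulator can be physically thicker for the same areal
# capacitance, C/A = eps0 * k / t. The GaGdO permittivity (k ~ 20) is an
# assumed illustrative value, not taken from the thesis.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_SIO2 = 3.9       # relative permittivity of SiO2

def capacitance_per_area(k, t_m):
    """Areal capacitance (F/m^2) of a dielectric with permittivity k, thickness t."""
    return EPS0 * k / t_m

def equivalent_oxide_thickness(k, t_m):
    """SiO2 thickness with the same capacitance as a high-k film of thickness t."""
    return t_m * K_SIO2 / k

t_highk = 10e-9                                    # 10 nm high-k film
eot = equivalent_oxide_thickness(20.0, t_highk)    # equivalent SiO2 thickness
```

With these assumed numbers, a 10 nm high-k film matches the capacitance of a much thinner SiO₂ layer, which is exactly the leakage/robustness advantage the abstract describes.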