933 results for Set of Weak Stationary Dynamic Actions
Abstract:
A behavioral mind-set refers to the effect of performing a behavior in one situation (e.g., deciding which animals jump higher, dolphins or sea lions) on the likelihood of performing a conceptually similar behavior in subsequent, unrelated situations (e.g., deciding which of two candies to purchase). It reflects the activation and persistence of procedural knowledge. My dissertation circumscribes the construct of a behavioral mind-set and proposes a theoretical framework describing how mind-sets operate as well as their cognitive and motivational determinants. Three sets of studies investigated the role of mind-sets in different domains. The first set of studies explored the influence of making comparative judgments on subsequent decision making. Specifically, I found that making a comparative judgment in one situation activates a which-to-buy mind-set that increases the willingness to decide which of two products to purchase in a later situation without considering the option of not buying anything at all. This mind-set can be activated not only by stating preferences for one of two products but also by comparing the relative attractiveness of wild animals, comparing the animals with respect to physical attributes, and estimating how similar one object is to another. Furthermore, the mind-set, once activated, influences not only purchase intentions in hypothetical situations but also actual decisions to purchase one of several types of products on sale after the experiment. The second set of studies investigated whether generating supportive elaborations or counterarguments in one situation influences people's tendency to engage in similar behavior in a subsequent, unrelated situation. I found that making supportive elaborations in one situation gives rise to a bolstering mind-set that, once activated, increases participants' disposition to generate supportive thoughts in response to persuasive communications that they receive later and, therefore, increases the effectiveness of persuasion. Correspondingly, generating opposing arguments in an initial situation activates a counterarguing mind-set that increases the tendency to argue against persuasive communications and decreases their effectiveness. However, a counterarguing mind-set may increase the effectiveness of persuasion if the messages are difficult to refute. The third set of studies distinguished between the influence of motivation on consumer behavior and the influence of a mind-set that is activated by this motivation. Specifically, I found that appetitive motivation, which naturally increases people's tendency to acquire food products, can give rise to a cognition-based acquisition mind-set that increases people's disposition to acquire non-food products as well. This acquisition mind-set may persist even when the appetitive motivation that gave rise to it is satiated by eating. Moreover, the disposition to acquire non-food products is not mediated by the products' attractiveness. The studies suggest that motivation and mind-sets may independently influence consumers' evaluation of a product and their dispositions to acquire it. Motivation is more likely to influence product evaluations, whereas a mind-set is more likely to influence consumers' acquisition dispositions. In summary, a behavioral mind-set can be activated in the process of performing a behavior, and it may influence people's subsequent behaviors in unrelated situations in which the activated procedure is applicable.
Moreover, motivation to engage in one behavior could also elicit a cognition-based mind-set, which may change people’s subsequent behaviors.
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case with software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from their end to the root cause. A debugging tool is created that enables developers to use the approach and integrates it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
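A minimal sketch of the subsequence idea, in Python and under simplifying assumptions: each failing test case is reduced to a statement-level trace, and a pairwise longest-common-subsequence fold yields a subsequence shared by all failing traces as a candidate faulty path. The trace contents and function names are illustrative, not the dissertation's actual implementation, and the pairwise fold is a heuristic rather than a true multi-sequence common-subsequence algorithm.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two statement traces (classic DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack to recover one common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def candidate_faulty_path(failing_traces):
    """Fold pairwise LCS over all failing traces; the result is shared by
    every trace and serves as a candidate faulty execution path."""
    return reduce(lcs, failing_traces)

# Hypothetical statement-level traces of three failing test cases.
traces = [
    ["open", "parse", "validate", "write", "close"],
    ["open", "seek", "parse", "write", "flush", "close"],
    ["open", "parse", "write", "close"],
]
print(candidate_faulty_path(traces))  # ['open', 'parse', 'write', 'close']
```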
Abstract:
A human genome contains more than 20 000 protein-encoding genes. The human proteome, in contrast, is estimated to be much more complex and dynamic. The most powerful tool to study proteins today is mass spectrometry (MS). MS-based proteomics is based on the measurement of the masses of charged peptide ions in the gas phase. The peptide amino acid sequence can be deduced, and matching proteins can be found, using software to correlate MS data with sequence database information. Quantitative proteomics allows the estimation of the absolute or relative abundance of a certain protein in a sample. Label-free quantification methods use the intrinsic MS peptide signals in the calculation of the quantitative values, enabling the comparison of peptide signals from numerous patient samples. In this work, a quantitative MS methodology was established to study aromatase-overexpressing (AROM+) male mouse liver and ovarian endometriosis tissue samples. The workflow of label-free quantitative proteomics was optimized in terms of sensitivity and robustness, allowing the quantification of 1500 proteins with a low coefficient of variation in both sample types. Additionally, five statistical methods were evaluated for use with label-free quantitative proteomics data. The proteome data were integrated with other omics datasets, such as mRNA microarray and metabolite data. As a result, an altered lipid metabolism was discovered in the liver of male AROM+ mice. The results suggest a reduced beta oxidation of long-chain phospholipids in the liver and increased levels of pro-inflammatory fatty acids in the circulation in these mice. In the endometriosis tissues, in turn, a set of proteins highly specific for ovarian endometrioma was discovered, many of which are under the regulation of the growth factor TGF-β1. This finding supports subsequent biomarker verification in a larger number of endometriosis patient samples.
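As a hedged illustration of the quantitative step, the sketch below computes per-protein coefficients of variation across replicates and a per-protein two-sample t-test on log-intensities, one of several statistical treatments that could be applied to label-free data; the intensity matrices are simulated and do not represent the actual AROM+ or endometriosis datasets.

```python
import numpy as np
from scipy import stats

# Hypothetical label-free intensity matrices: rows = proteins, columns = samples.
rng = np.random.default_rng(0)
aromplus = rng.lognormal(mean=10, sigma=0.3, size=(1500, 4))   # AROM+ group
wildtype = rng.lognormal(mean=10, sigma=0.3, size=(1500, 4))   # control group

# Coefficient of variation per protein within one group (robustness check).
cv_aromplus = aromplus.std(axis=1, ddof=1) / aromplus.mean(axis=1)

# Simple per-protein two-sample t-test on log2 intensities.
res = stats.ttest_ind(np.log2(aromplus), np.log2(wildtype), axis=1)

print(f"median CV in AROM+ group: {np.median(cv_aromplus):.2f}")
print(f"proteins with p < 0.05: {(res.pvalue < 0.05).sum()}")
```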
Abstract:
In this contribution, a system identification procedure for a two-input Wiener model suitable for the analysis of the disturbance behavior of integrated nonlinear circuits is presented. The identified block model is comprised of two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white noise input in combination with a model reduction scheme is adopted. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input, whose solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal that conductively couples into the port of consideration. © 2011 Author(s).
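A simplified, single-input sketch of the two identification steps described above, assuming a discrete-time Wiener structure (FIR linear block followed by a static polynomial nonlinearity): the linear block is estimated by correlation analysis with a white-noise input, and the nonlinear block's coefficients are then obtained from a linear least-squares system. Model orders, signal lengths, and the "true" system are invented for illustration; the paper's actual procedure additionally handles two inputs and a model reduction scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" system used only to generate data: FIR filter + cubic nonlinearity.
h_true = np.array([1.0, 0.5, 0.25])
poly_true = np.array([0.0, 1.0, -0.3, 0.1])      # c0 + c1*v + c2*v^2 + c3*v^3

u = rng.standard_normal(20000)                    # white-noise excitation
v = np.convolve(u, h_true)[:len(u)]               # hidden linear-block output
y = np.polyval(poly_true[::-1], v)                # measured output

# Step 1: correlation analysis. For a white (Gaussian) input, the input/output
# cross-correlation is proportional to the impulse response of the linear
# block; the unknown gain is absorbed by the nonlinearity.
n_taps = 8
h_est = np.array([np.dot(y[k:], u[:len(u) - k]) for k in range(n_taps)]) / len(u)

# Step 2: set up a linear system in the polynomial coefficients and solve it
# by least squares, using the reconstructed intermediate signal.
v_est = np.convolve(u, h_est)[:len(u)]
A = np.vander(v_est, 4, increasing=True)          # columns [1, v, v^2, v^3]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated linear block (up to a gain):", np.round(h_est, 3))
print("estimated polynomial coefficients:", np.round(coeffs, 3))
```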
Abstract:
Robot-control designers have begun to exploit the properties of the human immune system in order to produce dynamic systems that can adapt to complex, varying, real-world tasks. Jerne’s idiotypic-network theory has proved the most popular artificial-immune-system (AIS) method for incorporation into behaviour-based robotics, since idiotypic selection produces highly adaptive responses. However, previous efforts have mostly focused on evolving the network connections and have often worked with a single, preengineered set of behaviours, limiting variability. This paper describes a method for encoding behaviours as a variable set of attributes, and shows that when the encoding is used with a genetic algorithm (GA), multiple sets of diverse behaviours can develop naturally and rapidly, providing much greater scope for flexible behaviour-selection. The algorithm is tested extensively with a simulated e-puck robot that navigates around a maze by tracking colour. Results show that highly successful behaviour sets can be generated within about 25 minutes, and that much greater diversity can be obtained when multiple autonomous populations are used, rather than a single one.
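A toy sketch of the encoding idea, under stated assumptions: a behaviour is a small dictionary of attributes, a genome is a variable-length list of behaviours, and standard GA operators act on it. The attribute names and ranges are invented and are not the paper's actual behaviour parameters; fitness evaluation (the simulated e-puck maze run) is omitted.

```python
import random

random.seed(42)

# Hypothetical behaviour attributes (not the paper's actual set).
ATTRIBUTES = {"speed": (0.0, 1.0), "turn_rate": (-1.0, 1.0), "colour_gain": (0.0, 2.0)}

def random_behaviour():
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ATTRIBUTES.items()}

def random_genome(min_len=2, max_len=6):
    """A genome is a variable-length set of behaviours."""
    return [random_behaviour() for _ in range(random.randint(min_len, max_len))]

def mutate(genome, rate=0.1):
    """Perturb attributes, and occasionally add or drop a whole behaviour."""
    for behaviour in genome:
        for name, (lo, hi) in ATTRIBUTES.items():
            if random.random() < rate:
                behaviour[name] = min(hi, max(lo, behaviour[name] + random.gauss(0, 0.1)))
    if random.random() < rate:
        genome.append(random_behaviour())
    elif len(genome) > 1 and random.random() < rate:
        genome.pop(random.randrange(len(genome)))
    return genome

def crossover(a, b):
    """One-point crossover that tolerates parents of different lengths."""
    cut_a, cut_b = random.randint(1, len(a)), random.randint(1, len(b))
    return a[:cut_a] + b[cut_b:]

# One offspring from a toy population; in the real system fitness would come
# from the simulated e-puck maze-navigation run.
population = [random_genome() for _ in range(20)]
child = mutate(crossover(random.choice(population), random.choice(population)))
print(len(child), child[0])
```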
Abstract:
Over the past 15 years, the number of international development projects aimed at combating global poverty has increased significantly. Within the water and sanitation sector, however, and despite heightened global attention and an increase in the number of infrastructure projects, over 800 million people remain without access to appropriate water and sanitation facilities. The majority of donor aid in the water supply and sanitation sector of developing countries is delivered through standalone projects. The quality of projects at the design and preparation stage is a critical determinant in meeting project objectives. The quality of projects at the early design stage, widely referred to as quality at entry (QAE), however, remains unquantified and largely subjective. This research argues that water and sanitation infrastructure projects in the developing world tend to be designed in the absence of a specific set of actions that ensure high QAE, and consequently have relatively high rates of failure. This research analyzes 32 cases of water and sanitation infrastructure projects implemented with partial or full World Bank financing globally from 2000 to 2010. The research uses categorical data analysis, regression analysis and descriptive analysis to examine perceived linkages between project QAE and project development outcomes and to determine which upstream project design factors are likely to impact the QAE of international development projects in water supply and sanitation. The research proposes a number of specific design-stage actions that can be incorporated into the formal review process of water and sanitation projects financed by the World Bank or other international development partners.
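As a hedged illustration of the kind of regression analysis mentioned above, the sketch below fits a logistic regression of a binary development-outcome indicator on a quality-at-entry rating; the variables and synthetic values are assumptions, not the actual World Bank project data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic stand-in for the 32 project cases: a QAE rating (1-6 scale)
# and a binary development-outcome indicator (1 = satisfactory).
qae = rng.integers(1, 7, size=32).reshape(-1, 1).astype(float)
outcome = (qae.ravel() + rng.normal(0, 1.5, size=32) > 3.5).astype(int)

model = LogisticRegression().fit(qae, outcome)
print("QAE coefficient:", model.coef_[0][0])
print("P(satisfactory | QAE = 5):", model.predict_proba([[5.0]])[0, 1])
```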
Abstract:
Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.
While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.
Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.
To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how the levels of many of these regulators combine to form states in transcriptional space. Using a method called single-molecule mRNA fluorescence in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variabilities between genes, each of which is nonetheless explained by a simple model of bursty transcription. From these data, we also observed that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across the genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
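A hedged sketch of the distribution-fitting step: bursty transcription is commonly approximated by a negative binomial distribution of mRNA counts, whose parameters map loosely onto burst frequency and burst size. The counts below are simulated and the maximum-likelihood routine is illustrative, not the analysis pipeline used in this work.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Simulated smFISH counts for one gene: bursty transcription is often
# approximated by a negative binomial (r ~ burst frequency, p sets burst size).
true_r, true_p = 2.0, 0.05
counts = stats.nbinom.rvs(true_r, true_p, size=500, random_state=rng)

def neg_log_lik(params):
    r, p = params
    if r <= 0 or not (0 < p < 1):
        return np.inf
    return -stats.nbinom.logpmf(counts, r, p).sum()

fit = optimize.minimize(neg_log_lik, x0=[1.0, 0.1], method="Nelder-Mead")
r_hat, p_hat = fit.x
print(f"burst-frequency-like parameter r = {r_hat:.2f}")
print(f"approximate mean burst size (1 - p) / p = {(1 - p_hat) / p_hat:.1f}")
```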
Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity among the population, suggesting that metastability can be modulated by this strong epigenetic mark.
Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for the identification of cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH for measurements of final expression states. Applying a simple mathematical framework to these lineage-tree associated expression states enables the inference of dynamic transitions. We apply our novel approach in order to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.
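A minimal sketch of the inference idea under strong simplifying assumptions (a symmetric two-state Markov model in which each daughter cell switches state independently after division): the per-generation switching probability can be estimated from the fraction of sister pairs whose endpoint states disagree. The data are simulated, and the model is far simpler than the framework applied in this work.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical endpoint data: for each division recorded by time-lapse
# microscopy, the smFISH-assigned states (0 or 1) of the two daughters.
true_q = 0.15                                   # per-generation switching probability
mothers = rng.integers(0, 2, size=2000)
switch = rng.random((2000, 2)) < true_q
daughters = (mothers[:, None] ^ switch).astype(int)

# Under a symmetric two-state Markov model, each daughter switches
# independently, so P(discordant sisters) = 2 q (1 - q); invert that.
discordance = np.mean(daughters[:, 0] != daughters[:, 1])
q_hat = (1 - np.sqrt(1 - 2 * discordance)) / 2

print(f"observed sister discordance: {discordance:.3f}")
print(f"inferred switching probability per generation: {q_hat:.3f}")
```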
Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.
Abstract:
Part 2: Behaviour and Coordination
Abstract:
Doctorate in Development Studies
Abstract:
This thesis deals with quantifying the resilience of a network of pavements. Calculations were carried out by modeling network performance under a set of possible damage-meteorological scenarios with known probabilities of occurrence. Resilience evaluation was performed a priori while accounting for optimal preparedness decisions and additional response actions that can be taken under each of the scenarios. Unlike the common assumption that the pre-event condition of all system components is uniform, fixed, and pristine, the evolution of component condition was incorporated herein. For this purpose, the health of the individual system components immediately prior to hazard-event impact, under all considered scenarios, was associated with a serviceability rating. This rating was projected to reflect both natural deterioration and any intermittent improvements due to maintenance. The scheme was demonstrated for a hypothetical case study involving LaGuardia Airport. Results show that resilience can be impacted by the condition of the infrastructure elements, their natural deterioration processes, and prevailing maintenance plans. The findings imply that ordinary resilience work generally reports upper-bound values, and that including evolving component conditions is of value.
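A schematic sketch of the a priori evaluation with invented numbers: each damage-meteorological scenario carries an occurrence probability and a network-performance level (already reflecting preparedness and response actions), and expected resilience is the probability-weighted sum, scaled down when component serviceability has deteriorated. The scenario set and values are purely illustrative.

```python
# Hypothetical damage-meteorological scenarios for a pavement network:
# probability of occurrence and performance retained with optimal
# preparedness/response, given the components' pre-event serviceability.
scenarios = [
    {"name": "minor storm",   "prob": 0.60, "performance": 0.95},
    {"name": "major storm",   "prob": 0.30, "performance": 0.75},
    {"name": "extreme event", "prob": 0.10, "performance": 0.40},
]

def expected_resilience(scenarios, deterioration_factor=1.0):
    """Probability-weighted network performance; a deterioration factor
    below 1.0 scales performance down when components are not pristine."""
    return sum(s["prob"] * s["performance"] * deterioration_factor for s in scenarios)

print("pristine components:", round(expected_resilience(scenarios), 3))
print("aged components:    ", round(expected_resilience(scenarios, 0.9), 3))
```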
Abstract:
Presentation
Research on the Practicum and externships has a long history and involves important aspects for analysis. For example, the recent changes taking place in university degrees allot more credits to the Practicum course in all degrees, and Company-University collaboration has exposed the need to study new learning environments. The rise of ICT practices such as ePortfolios, which require technological solutions and methods supported by experimentation, study and research, calls for particular examination given the dynamic momentum of technological innovation. Tutoring the Practicum and externships requires remote monitoring and communication using ePortfolios, and competence-based assessment and students' requirement to provide evidence of learning demand the best tutoring methods available with ePortfolios. Among the elements of ePortfolios, eRubrics emerge as a tool for design, communication and competence assessment. This project aims to consolidate a line of research on eRubrics, already undertaken by a previous project -I+D+i [EDU2010-15432]-, in order to expand the network of researchers and Centres of Excellence in Spain and other countries: Harvard University in the USA, the University of Cologne in Germany, the University of Colima in Mexico, the Federal University of Parana and the University of Santa Catarina in Brazil, and Stockholm University in Sweden(1). This new project [EDU2013-41974-P](2) examines the impact of eRubrics on tutoring and on assessing the Practicum course and externships. Through technology, distance tutoring grants an extra dimension to human communication. New forms of teaching with technological mediation are on the rise and are highly valuable, not only in formal education but especially in both public and private sectors of non-formal education, such as occupational training, education for the unemployed and public servant training.
Objectives
Obj. 1. To analyse models of technology used in assessing learning in the Practicum of all degrees at Spanish Faculties of Education. Obj. 2. To study models of learning assessment mediated by eRubrics in the Practicum. Obj. 3. To analyse communication through eRubrics between students and their tutors at university and practice centres, focusing on students' understanding of the competences and evidence to be assessed in the Practicum. Obj. 4. To design assessment services and products, in order to federate companies and practice centres with training institutions. Among many other features, CoRubric(3) offers the following functions: 1. The possibility to assess people, products or services by using rubrics. 2. Ipsative assessment. 3. Designing fully flexible rubrics. 4. Drafting reports and exporting results from eRubrics in a project. 5. Letting students and teachers discuss the evaluation and the application of the criteria.
Methodology, Methods, Research Instruments or Sources Used
The project will use techniques to collect and analyse data from two methodological approaches: 1. In order to meet the first objective, we propose an initial exploratory descriptive study (Buendía Eisman, Colás Bravo & Hernández Pina, 1998), which involves conducting interviews with Practicum coordinators from all educational degrees across Spain, as well as analysing the contents of the teaching guides used in all educational degrees across Spain. 55 academic managers were interviewed from about 10 faculties of education in public universities in Spain (20%), and 376 course guides from 36 public institutions in Spain (72%) were analysed. 2. In order to satisfy the second objective, 7 universities were selected to implement the project's two instruments, aimed at tutors at practice centres and faculty tutors. All data-collection instruments were validated by experts using the Delphi method. The selection of experts considered three aspects: years of professional experience, number and quality of publications in the field (Practicum, Educational Technology and Teacher Training), and self-rating of their knowledge. From these data the Coefficient of Competence (Kcomp) was calculated (Martínez, Zúñiga, Sala & Meléndez, 2012); results in all cases showed an average of more than 0.09 points. The two instruments for the first objective were validated during the first half of the 2014-15 academic year, with data collected during the second half; those for the second objective were validated during the first half of the 2015-16 year, with data collection in the second half. The set of four instruments (two for each of objectives 1 and 2) share the same dimensions across all sources (coordinators, course guides, tutors at practice centres and faculty tutors): a. Institution-Organization, b. Nature of internships, c. Relationship between agents, d. Practicum management, e. Assessment, f. Technological support, g. Training, and h. Assessment ethics.
Conclusions, Expected Outcomes or Findings
The first results respond to Objective 1, with different conclusions for each of the six dimensions. In the case of the internal regulations governing the organisation and structure of the Practicum, we note that the most traditional degrees (Elementary and Primary) share common internal rules, in particular regarding development methodology and criteria, in contrast to the other degrees (Pedagogy and Social Education). It is also true that the practice centres in the latter cases are very different from each other and can be a public institution, a school, a company, a museum, etc. The final report (56.34%) and daily activity logs (43.67%) are the items most demanded of students in all degrees, followed by lesson plans (28.18%), portfolios (19.72%), didactic units (26.7%) and others (32.4%). As technical support, the university's own platform (47.89%) and email (57.75%) have mainly been used, followed by other services and tools (9.86%) and rubric platforms (1.41%). The assessment criteria are divided between formal aspects (12.38%), written expression (12.38%), treatment of the subject (14.45%), methodological rigour of the work (10.32%), and level of argument, clarity and relevance of conclusions (10.32%). In general terms, we could say that there is a trend towards, and a debate between, formative assessment and accreditation-oriented assessment. There has not yet been sufficient time to study further and compare the other dimensions and sources of information. We hope to provide more analysis and conclusions by the conference date.
Abstract:
Traditional decision-making research has often focused on one's ability to choose from a set of prefixed options, ignoring the process by which decision makers generate courses of action (i.e., options) in situ (Klein, 1993). In complex and dynamic domains, this option generation process is particularly critical to understanding how successful decisions are made (Zsambok & Klein, 1997). When generating response options for oneself to pursue (i.e., during the intervention phase of decision making), previous research has supported quick and intuitive heuristics, such as the Take-The-First heuristic (TTF; Johnson & Raab, 2003). When generating predictive options for others in the environment (i.e., during the assessment phase of decision making), previous research has supported the situational-model-building process described by Long Term Working Memory theory (LTWM; see Ward, Ericsson, & Williams, 2013). In the first three experiments, the claims of TTF and LTWM are tested during assessment- and intervention-phase tasks in soccer. To test what other environmental constraints may dictate the use of these cognitive mechanisms, the claims of these models are also tested in the presence and absence of time pressure. In addition to understanding the option generation process, it is important that researchers in complex and dynamic domains also develop tools that can be used by 'real-world' professionals. For this reason, three more experiments were conducted to evaluate the effectiveness of a new online assessment of perceptual-cognitive skill in soccer. This test differentiated between skill groups, predicted performance on a previously established test, and predicted option generation behavior. The test also outperformed domain-general cognitive tests, but not a domain-specific knowledge test, when predicting skill group membership. Implications for theory and training, and future directions for the development of applied tools, are discussed.
Abstract:
This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to eventually predict future data access patterns - i.e. the so-called dataset "popularity" of the CMS datasets on the Grid - with focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and the applications they submit every day. These applications (or "jobs") access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, makes it possible to gain precious insight into storage occupancy over time and into the different access patterns, and ultimately to extract suggested actions based on this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power for future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
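A hedged sketch of the supervised classification step, assuming a per-dataset feature table of past access statistics and a binary "popular in the next period" label; the feature names, synthetic data, and classifier choice are illustrative and do not describe the actual CMS popularity machinery.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(11)

# Synthetic per-dataset features over a past time window:
# [n_accesses, n_users, cpu_hours, n_hosting_sites]
X = np.column_stack([
    rng.poisson(50, 5000),
    rng.poisson(5, 5000),
    rng.gamma(2.0, 100.0, 5000),
    rng.integers(1, 20, 5000),
]).astype(float)

# Illustrative label: "popular next period" correlates with recent accesses/users.
y = ((X[:, 0] + 10 * X[:, 1] + rng.normal(0, 20, 5000)) > 100).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```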
Abstract:
Attention to green building is driven by the desire to reduce a building's running cost over its entire life cycle. Moreover, by using sustainable technologies and more environmentally friendly products in the building sector, the construction industry contributes significantly to the sustainable actions of our society. Different certification systems have entered the market with the aim of measuring a building's sustainability. However, each system uses its own set of criteria for the purpose of rating. The primary goal of this study is to identify a comprehensive set of criteria for the measurement of building sustainability, and thereby to facilitate the comparison of existing rating methods. The collection and analysis of the criteria, identified through a comprehensive literature review, has led to the establishment of two additional categories besides the three pillars of sustainability. The comparative analyses presented in this thesis reveal strengths and weaknesses of the chosen green building certification systems - LEED, BREEAM, and DGNB.
Abstract:
This study took place at one of the intercultural universities (IUs) of Mexico that serve primarily indigenous students. The IUs are pioneers in higher education despite their numerous challenges (Bertely, 1998; Dietz, 2008; Pineda & Landorf, 2010; Schmelkes, 2009). To overcome educational inequalities among their students (Ahuja, Berumen, Casillas, Crispín, Delgado et al., 2004; Schmelkes, 2009), the IUs have embraced performance-based assessment (PBA; Casillas & Santini, 2006). PBA allows a shared model of power and control related to learning and evaluation (Anderson, 1998). While conducting a review of the PBA strategies of the IUs, the researcher did not find a PBA instrument with valid and reliable estimates. The purpose of this study was to develop a process to create a PBA instrument, an analytic general rubric, with acceptable validity and reliability estimates to assess students' attainment of competencies in one of the IU's majors, Intercultural Development Management. The Human Capabilities Approach (HCA) was the theoretical framework and a sequential mixed method (Creswell, 2003; Teddlie & Tashakkori, 2009) was the research design. IU participants created a rubric during two focus groups, and seven Spanish-speaking professors in Mexico and the US piloted it using students' research projects. The evidence that demonstrates the attainment of competencies at the IU is a complex set of actual, potential and/or desired performances or achievements, also conceptualized as "functional capabilities" (FCs; Walker, 2008), that can be used to develop a rubric. Results indicate that the rubric's validity and reliability reached acceptable estimates of 80% agreement, surpassing minimum requirements (Newman, Newman, & Newman, 2011). Implications for practice involve the use of PBA within a formative assessment framework and the dynamic inclusion of constituencies. Recommendations for further research include introducing this study's instrument-development process to other IUs, conducting parallel mixed-design studies exploring the intersection between HCA and assessment, and conducting a case study exploring assessment in intercultural settings. Education articulated through the HCA empowers students (Unterhalter & Brighouse, 2007; Walker, 2008). This study aimed to contribute to the quality of student learning assessment at the IUs by providing a participatory process to develop a PBA instrument.
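As a small illustration of the agreement statistic reported above, the sketch below computes simple percent agreement between two raters' rubric scores; the scores are invented and the study's actual reliability procedure may differ.

```python
# Hypothetical rubric scores from two raters over the same ten student projects.
rater_a = [3, 4, 2, 4, 3, 1, 4, 2, 3, 4]
rater_b = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]

# Percent agreement: share of items on which the two raters give the same score.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement: {agreement:.0%}")  # the study reports 80% agreement
```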