938 results for Absetz, Brad: In other words
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Research literature is replete with the importance of collaboration in schools, the lack of its implementation, the centrality of the role of the principal, and the existence of a gap between knowledge and practice, or a "Knowing-Doing Gap." In other words, there is a set of knowledge that principals must have in order to create a collaborative workplace environment for teachers. This study sought to describe what high school principals know about creating such a culture of collaboration. The researcher combed journal articles, studies and professional literature to identify what principals must know in order to create a culture of collaboration. The result was ten elements of principal knowledge: Staff involvement in important decisions, Charismatic leadership not being necessary for success, Effective elements of teacher teams, Administrators' modeling of professional learning, The allocation of resources, Staff meetings focused on student learning, Elements of continuous improvement, and the Principles of Adult Learning, Student Learning and Change. From these ten elements, the researcher developed a web-based survey intended to measure nine of them (Charismatic leadership was excluded). Principals of accredited high schools in the state of Nebraska were invited to participate in this survey, as high schools are well known for the isolation that teachers experience, particularly as a result of departmentalization. The results indicate that principals have knowledge of eight of the nine measured elements; the one they lacked an understanding of was Principles of Student Learning. Given these two findings of what principals do and do not know, the researcher recommends that professional organizations, intermediate service agencies and district-level support staff engage in systematic and systemic initiatives to increase principals' knowledge of the element they lack.
Further, given that eight of the nine elements are understood by principals, it would be wise to examine reasons for the implementation gap (Knowing-Doing Gap) and how to overcome it.
Abstract:
Wavelength-routed networks (WRN) are very promising candidates for next-generation Internet and telecommunication backbones. In such a network, optical-layer protection is of paramount importance due to the risk of losing large amounts of data under a failure. To protect the network against this risk, service providers usually provide a pair of risk-independent working and protection paths for each optical connection. However, the investment made for optical-layer protection increases network cost. To reduce capital expenditure, service providers need to utilize their network resources efficiently. Among the existing approaches, shared-path protection has proven to be practical and cost-efficient [1]. In shared-path protection, several protection paths can share a wavelength on a fiber link if their working paths are risk-independent. In real-world networks, provisioning is usually implemented without knowledge of future network resource utilization status. As the network changes with the addition and deletion of connections, network utilization becomes sub-optimal. Reconfiguration, that is, the re-provisioning of existing connections, is an attractive solution for closing the gap between the current network utilization and its optimal value [2]. In this paper, we propose a new shared-protection-path reconfiguration approach. Unlike some previous reconfiguration approaches that alter the working paths, our approach changes only protection paths; hence it does not interfere with the ongoing services on the working paths and is therefore risk-free. Previous studies have verified the benefits arising from the reconfiguration of existing connections [2] [3] [4]. Most of them aim at minimizing the total used wavelength-links or ports.
However, this objective does not directly relate to cost saving because minimizing the total network resource consumption does not necessarily maximize the capability of accommodating future connections. As a result, service providers may still need to pay for early network upgrades. Alternatively, our proposed shared-protection-path reconfiguration approach is based on a load-balancing objective, which minimizes the network load distribution vector (LDV, see Section 2). This new objective is designed to postpone network upgrades, thus bringing extra cost savings to service providers. In other words, by using the new objective, service providers can establish as many connections as possible before network upgrades, resulting in increased revenue. We develop a heuristic load-balancing (LB) reconfiguration approach based on this new objective and compare its performance with an approach previously introduced in [2] and [4], whose objective is minimizing the total network resource consumption.
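The load-balancing objective described above can be sketched in a few lines. The link names, load values, and lexicographic comparison rule below are illustrative assumptions for exposition, not the paper's actual algorithm or data:

```python
# Sketch of the load-balancing (LB) objective: the load distribution
# vector (LDV) lists link loads in descending order, and configuration A
# is considered better balanced than B when its LDV is lexicographically
# smaller. Link names and load values are illustrative assumptions.
def ldv(link_loads):
    """Load distribution vector: wavelength usage per link, sorted descending."""
    return sorted(link_loads.values(), reverse=True)

def better_balanced(loads_a, loads_b):
    """True if configuration A has the lexicographically smaller LDV."""
    return ldv(loads_a) < ldv(loads_b)

before = {"l1": 7, "l2": 7, "l3": 1, "l4": 1}   # before reconfiguration
after_ = {"l1": 5, "l2": 4, "l3": 4, "l4": 3}   # same total load, spread out

assert sum(before.values()) == sum(after_.values())  # no extra capacity used
assert better_balanced(after_, before)               # peak load 7 -> 5
```

The point of the example is that both configurations consume the same total resources, yet the second leaves every link with more headroom for future connections, which is exactly why minimizing total consumption alone does not postpone upgrades.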
Abstract:
Over the past several decades, the topic of child development in a cultural context has received a great deal of theoretical and empirical investigation. Investigators from the fields of indigenous and cultural psychology have argued that childhood is socially and historically constructed, rather than a universal process with a standard sequence of developmental stages or descriptions. As a result, many psychologists have become doubtful that any stage theory of cognitive or social-emotional development can be found to be valid for all times and places. In placing more theoretical emphasis on contextual processes, they define culture as a complex system of common symbolic action patterns (or scripts) built up through everyday human social interaction, by means of which individuals create common meanings and in terms of which they organize experience. Researchers understand culture to be organized and coherent, but not homogeneous or static, and realize that the complex dynamic system of culture constantly undergoes transformation as participants (adults and children) negotiate and re-negotiate meanings through social interaction. These negotiations and transactions give rise to unceasing heterogeneity and variability in how different individuals and groups of individuals interpret values and meanings. However, while many psychologists, both inside and outside the fields of indigenous and cultural psychology, are now willing to give up the idea of a universal path of child development and a universal story of parenting, they have not necessarily foreclosed on the possibility of discovering and describing some universal processes that underlie socialization and development-in-context. The roots of such universals would lie in the biological aspects of child development, in the evolutionary processes of adaptation, and in the unique symbolic and problem-solving capacities of the human organism as a culture-bearing species.
For instance, according to functionalist psychological anthropologists, shared (cultural) processes surround the developing child and promote, in the long view, the survival of families and groups as they seek continuity in the face of ecological change and resource competition (e.g., Edwards & Whiting, 2004; Gallimore, Goldenberg, & Weisner, 1993; LeVine, Dixon, LeVine, Richman, Leiderman, Keefer, & Brazelton, 1994; LeVine, Miller, & West, 1988; Weisner, 1996, 2002; Whiting & Edwards, 1988; Whiting & Whiting, 1980). As LeVine and colleagues (1994) state: "A population tends to share an environment, symbol systems for encoding it, and organizations and codes of conduct for adapting to it [emphasis added]. It is through the enactment of these population-specific codes of conduct in locally organized practices that human adaptation occurs. Human adaptation, in other words, is largely attributable to the operation of specific social organizations (e.g. families, communities, empires) following culturally prescribed scripts (normative models) in subsistence, reproduction, and other domains [communication and social regulation]." (p. 12) It follows, then, that in seeking to understand child development in a cultural context, psychologists need to support collaborative and interdisciplinary developmental science that crosses international borders. Such research can advance cross-cultural psychology, cultural psychology, and indigenous psychology, understood as three sub-disciplines composed of scientists who frequently communicate and debate with one another and mutually inform one another's research programs. For example, to turn to parental belief systems, the particular topic of this chapter, it is clear that collaborative international studies are needed to support the goal of cross-cultural psychologists of producing findings that go beyond simply describing cultural differences in parental beliefs.
Comparative researchers need to shed light on whether parental beliefs are (or are not) systematically related to differences in child outcomes; and they need meta-analyses and reviews to explore between- and within-culture variations in parental beliefs, with a focus on issues of social change (Saraswathi, 2000). Likewise, collaborative research programs can foster the goals of indigenous psychology and cultural psychology and lay out valid descriptions of individual development in their particular cultural contexts and the processes, principles, and critical concepts needed for defining, analyzing, and predicting outcomes of child development-in-context. The project described in this chapter is based on an approach that integrates elements of comparative methodology to serve the aim of describing particular scenarios of child development in unique contexts. The research team of cultural insiders and outsiders allows for a look at American belief systems based on a dialogue of multiple perspectives.
Abstract:
In epidemiology, the basic reproduction number R0 is usually defined as the average number of new infections caused by a single infective individual introduced into a completely susceptible population. According to this definition, R0 is related to the initial stage of the spreading of a contagious disease. However, in epidemiological models based on ordinary differential equations (ODEs), R0 is commonly derived from a linear stability analysis and interpreted as a bifurcation parameter: typically, when R0 > 1, the contagious disease tends to persist in the population because the endemic stationary solution is asymptotically stable; when R0 < 1, the corresponding pathogen tends to naturally disappear because the disease-free stationary solution is asymptotically stable. Here we intend to answer the following question: do these two different approaches for calculating R0 give the same numerical values? In other words, is the number of secondary infections caused by a single sick individual equal to the threshold obtained from stability analysis of the steady states of the ODEs? To find the answer, we use a susceptible-infective-recovered (SIR) model described in terms of ODEs and also in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. The values of R0 obtained from both approaches are compared, showing good agreement. (C) 2012 Elsevier B.V. All rights reserved.
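The ODE side of this comparison is easy to illustrate. The sketch below uses assumed parameter values and a simple forward-Euler integration (the paper's PCA model is not reproduced here); it shows the threshold behaviour at R0 = b/g that the stability analysis predicts:

```python
# Minimal SIR sketch (parameter values are assumed): with normalized
# populations, dS/dt = -b*S*I and dI/dt = b*S*I - g*I, and linear
# stability of the disease-free state gives the threshold R0 = b/g.
def simulate_sir(b, g, i0=1e-3, dt=0.01, steps=200):
    """Forward-Euler integration; returns the infective fraction at the end."""
    s, i = 1.0 - i0, i0
    for _ in range(steps):
        ds = -b * s * i
        di = b * s * i - g * i
        s, i = s + ds * dt, i + di * dt
    return i

# R0 = 0.5/0.25 = 2 > 1: the infective fraction grows from its initial value.
assert simulate_sir(0.5, 0.25) > 1e-3
# R0 = 0.2/0.4 = 0.5 < 1: the infection dies out.
assert simulate_sir(0.2, 0.4) < 1e-3
```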
Abstract:
Considering the ecological importance of stingless bees as caretakers and pollinators of a variety of native plants, it is necessary to improve techniques that increase the number of colonies in order to preserve these species and the biodiversity associated with them. Thus, our aim was to develop a methodology for the in vitro production of stingless bee queens by offering a large quantity of food to the larvae. Our methodology consisted of determining the amount of larval food needed for the development of the queens, collecting and storing the larval food, and feeding it to the larvae in acrylic plates. We found that the total average amount of larval food in a worker bee cell of F. varia is approximately 26.70 ± 3.55 µL. We observed that after the consumption of extra amounts of food (25, 30, 35 and 40 µL) the larvae differentiate into queens (n = 98). Therefore, the average total volume of food needed for the differentiation of a young larva into an F. varia queen is approximately 61.70 ± 5.00 µL. In other words, the larvae destined to become queens eat 2.31 times more food than those destined to become workers. We used the species Frieseomelitta varia as a model; however, the methodology can be reproduced for all species of stingless bees whose mechanism of caste differentiation depends on the amount of food ingested by the larvae. Our results demonstrate the effectiveness of the in vitro technique developed herein, pointing to the possibility of its use as a tool to assist the production of queens on a large scale. This would allow for the artificial splitting of colonies and contribute to conservation efforts in native bees.
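The 2.31-fold figure follows directly from the two mean volumes reported in the abstract; a quick check:

```python
# Quick check of the feeding ratio reported above, using the two mean
# food volumes from the text.
worker_food_uL = 26.70   # mean larval food in an F. varia worker cell
queen_food_uL = 61.70    # mean total food for queen differentiation
ratio = queen_food_uL / worker_food_uL
assert round(ratio, 2) == 2.31   # queens eat ~2.31x more than workers
```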
Abstract:
Typical antbirds (~209 species) represent a diverse radiation of Neotropical birds that includes many species of conservation concern. Here we present eight anonymous nuclear loci designed for the squamate antbird Myrmeciza squamosa, a species endemic to the Atlantic Forest of Brazil. We also show that these anonymous nuclear loci are amplifiable in a number of other typical antbird species from related genera (Myrmeciza, Percnostola, Gymnocichla, Myrmoborus, Pyriglena and Formicivora), including three threatened species (Myrmeciza ruficauda, Formicivora littoralis, and Pyriglena atra). These markers will be useful not only for helping to manage threatened species of typical antbirds, but also for exploring their evolutionary histories, at both intra- and interspecific levels.
Abstract:
Premise of the study: Microsatellite loci were developed for the tucuma of Amazonas (Astrocaryum aculeatum), and cross-species amplification was performed in six other Arecaceae, to investigate genetic diversity and population structure and to provide support for the management of natural populations. Methods and Results: Fourteen microsatellite loci were isolated from a microsatellite-enriched genomic library and used to characterize two wild populations of tucuma of Amazonas (from the cities of Manaus and Manicore). The investigated loci displayed high polymorphism in both A. aculeatum populations, with a mean observed heterozygosity of 0.498. Amplification rates ranging from 50% to 93% were found for four Astrocaryum species and two additional species of Arecaceae. Conclusions: The information derived from the microsatellite markers developed here provides significant gains in conserved allelic richness and supports the implementation of several molecular breeding strategies for the Amazonian tucuma.
Abstract:
Objective: To investigate the prognostic significance of ST-segment elevation (STE) in aVR associated with ST-segment depression (STD) in other leads in patients with non-STE acute coronary syndrome (NSTE-ACS). Background: In NSTE-ACS patients, STD has been extensively associated with severe coronary lesions and poor outcomes; the prognostic role of STE in aVR is uncertain. Methods: We enrolled 888 consecutive patients with NSTE-ACS. They were divided into two groups according to the presence or absence, on the admission ECG, of STE ≥ 1 mm in aVR together with STD (defined as the high-risk ECG pattern). The primary and secondary endpoints were in-hospital cardiovascular (CV) death and the rate of culprit left main disease (LMD). Results: Patients with the high-risk ECG pattern (n = 121) showed a worse clinical profile than patients without it (n = 575) [median GRACE (Global Registry of Acute Coronary Events) risk score = 142 vs. 182, respectively]. A total of 75% of patients underwent coronary angiography. The rate of in-hospital CV death was 3.9%. On multivariable analysis, patients with the high-risk ECG pattern showed an increased risk of CV death (OR = 2.88, 95% CI 1.05-7.88) and culprit LMD (OR = 4.67, 95% CI 1.86-11.74) compared to patients without it. The prognostic significance of the high-risk ECG pattern was maintained even after adjustment for the GRACE risk score (OR = 2.28, 95% CI 1.06-4.93 and OR = 4.13, 95% CI 2.13-8.01, for the primary and secondary endpoints, respectively). Conclusions: STE in aVR associated with STD in other leads predicts in-hospital CV death and culprit LMD. This pattern may add prognostic information in patients with NSTE-ACS on top of the recommended scoring system.
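For readers unfamiliar with the OR notation above, the sketch below shows how a crude (unadjusted) odds ratio with a Wald 95% confidence interval is computed from a 2x2 table. The counts are purely illustrative assumptions, not the study's data, and the study's reported ORs come from multivariable models, which this crude calculation does not reproduce:

```python
import math

# Crude odds ratio and Wald 95% CI from a 2x2 table.
# The counts below are illustrative assumptions, not the study's data.
def odds_ratio_ci(a, b, c, d):
    """a, b: events / non-events with the risk factor; c, d: without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 111, 17, 558)  # hypothetical counts
assert lo < or_ < hi
assert or_ > 1   # event more likely in the exposed group
```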
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy products in boxes, such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing-machine industries are present in Italy, notably packaging-machine industries; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS, from a functional and behavioural point of view, is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing real-time diagnostic information in support of machine maintenance operations. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for implementing either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been adopted in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety. In other words, the control system should deal not only with the nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMSs contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control and to fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
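The diagnoser idea from Discrete Event Systems theory can be illustrated with a toy example. The automaton, state names, and events below are assumptions for illustration only, not one of the thesis's models: an unobservable fault event leaves the observer uncertain among several states, and the state estimate tracks that uncertainty:

```python
# Toy diagnoser in the Discrete Event Systems style (an illustrative
# automaton, not one of the thesis's models). The fault event "f" is
# unobservable; the estimate is the set of states consistent with the
# observed event sequence.
TRANS = {                     # state -> {event: next state}
    "ok":     {"start": "run"},
    "run":    {"f": "faulty", "stop": "ok"},
    "faulty": {"stop": "alarm"},
}
UNOBSERVABLE = {"f"}

def unobservable_closure(states):
    """Add every state reachable through unobservable events only."""
    result, frontier = set(states), list(states)
    while frontier:
        s = frontier.pop()
        for ev, nxt in TRANS.get(s, {}).items():
            if ev in UNOBSERVABLE and nxt not in result:
                result.add(nxt)
                frontier.append(nxt)
    return result

def diagnose(observed):
    """State estimate after a sequence of observable events."""
    estimate = unobservable_closure({"ok"})
    for ev in observed:
        moved = {TRANS[s][ev] for s in estimate if ev in TRANS.get(s, {})}
        estimate = unobservable_closure(moved)
    return estimate

assert diagnose(["start"]) == {"run", "faulty"}        # fault uncertain
assert diagnose(["start", "stop"]) == {"ok", "alarm"}  # still ambiguous
```

In a full diagnoser construction the estimate would eventually separate faulty from non-faulty runs when the system is diagnosable; here the toy automaton deliberately stays ambiguous to show what the estimate represents.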
Abstract:
From the institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation, and it plays a very important role in the development of the economy. According to the law, the owner of an IPR enjoys an exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to protect IPR well while at the same time regulating the abuse of IPR is a topic of great interest in this knowledge-oriented market, and it is the basic research question of this dissertation. In this paper, by way of comparative study and law-and-economics analysis, and based on the theories of the Austrian School of Economics, the writer argues that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries), there is a real possibility that the owner of an IPR will abuse his dominant position. With the characteristics of the new economy, such as high rates of innovation, "instant scalability", network externalities and lock-in effects, IPR "will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time." Therefore, in order to preserve market order, to vitalize competition and innovation, and to benefit the customer, it is common in the EU and the US to apply competition law to regulate IPR abuse. From the Austrian School perspective, especially the Schumpeterian theories, innovation, competition, monopoly and entrepreneurship are interrelated; therefore, we should apply a dynamic antitrust model based on these theories to analyze the relationship between IPR and competition law.
China is still a developing country with a relatively low capacity for innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the most important task for the Chinese government. However, according to investigation reports, some multinational companies, drawing on their IPR and capital advantages, have obtained dominant or monopoly market positions in some aspects of some industries, and some IPR abuses have been committed by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. Yet as to how to effectively regulate IPR abuse by way of competition law in the Chinese situation, from the perspectives of law-and-economics theory, of legislation, and of judicial practice, there is a long way for China to go.
Abstract:
This work addresses the scope of party autonomy and the arbitral tribunal's competence to determine the applicable substantive law in international arbitration proceedings. Using a comparative-law approach, the legal systems of England (Arbitration Act 1996), France (Art. 1492 et seq. of the Nouveau Code de Procédure Civile) and Germany (Book 10 of the ZPO) were examined with regard to how non-state rules (lex mercatoria) are treated and under what conditions they can be applied, whether by the parties or by the arbitral tribunal. Furthermore, the work sought to show which of these legal systems is the most "competitive", in other words, which one succeeds in keeping pace with the development of a truly globalizing international market by clearing the way as far as possible for the application of such rules. Rigid national provisions are regarded in this context as diminishing the competitiveness of a national law that wishes to meet these challenges. French law proved to be the most "competitive" among the three largest European economies, in that it provides a suitable legal framework for parties involved in international commercial matters.
Abstract:
At the center of the study "The Sound of Democracy - the Sound of Freedom": Jazz Reception in Germany (1945-1963) stands a corpus of 16 oral-history interviews with witnesses of the German jazz scene. Those interviewed include musicians as well as visual artists, journalists, club owners and jazz fans who made up the jazz scene of the 1950s. The interviews are placed in the context of contemporary sources: magazine articles (mainly from "Jazz Podium") as well as radio manuscripts of the Bayerischer Rundfunk. The starting point is the question of what jazz meant to its audience; in other words, why did a student milieu that perceived itself as elitist choose jazz, of all things, as its personal form of expression from the large pool of cultural forms that streamed from the USA into Germany after the Second World War? What was its symbolic appeal for these young people? Connected with this question is a further consideration: to what extent was jazz perceived as a decidedly American form of expression, and which images of America were conveyed through jazz? Was jazz deliberately used by the occupying powers as a tool for the democratic re-education of the German people, and if so, in what form and to what extent? How strong were the symbolic power and metaphorical meaning of jazz for the German audience, and how does the symbolic power of jazz relate to that of the USA as an occupying or liberating power?
Abstract:
This PhD thesis is focused on the study of the molecular variability of specific proteins that form part of the outer membrane of the pathogen Neisseria meningitidis and have been described as protective antigens and important virulence factors. These antigens have been employed as components of the vaccine developed by Novartis Vaccines against N. meningitidis of serogroup B, and their variability in the meningococcal population is a key aspect when the effect of the vaccine is evaluated. The PhD project has led to the completion of three major studies, described in three different manuscripts, of which two have been published and the third is in preparation. The thesis is structured in three main chapters, each dedicated to one of the three studies. The first, described in Chapter 1, is specifically dedicated to the analysis of the molecular conservation of meningococcal antigens in the genomes of all species classified in the genus Neisseria (Conservation of Meningococcal Antigens in the Genus Neisseria. A. Muzzi et al., 2013, mBio 4(3)). The second study, described in Chapter 2, focuses on the analysis of the presence and conservation of the antigens in a panel of bacterial isolates obtained from cases of the disease and from healthy individuals, collected in the same year and in the same geographical area (Conservation of fHbp, NadA, and NHBA in carrier and pathogenic isolates of Neisseria meningitidis collected in the Czech Republic in 1993. A. Muzzi et al., manuscript in preparation). Finally, Chapter 3 describes the molecular features of the antigens in a panel of bacterial isolates collected over a period of 50 years and representative of the epidemiological history of meningococcal disease in the Netherlands (An Analysis of the Sequence Variability of Meningococcal fHbp, NadA and NHBA over a 50-Year Period in the Netherlands. S. Bambini et al., 2013, PLoS ONE e65043).
Abstract:
Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interaction and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations.
The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
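A minimal agent-based sketch of the niche idea, under assumed division rules (this is not the thesis's actual model or tool), shows how a population-level property, saturation at the niche capacity, emerges from local agent behaviour:

```python
import random

# Minimal agent-based sketch with assumed rules (not the thesis's model):
# each cell agent tries to divide inside a shared niche of finite
# capacity; saturation at the capacity emerges from the local rules.
class Cell:
    def __init__(self, p_divide=0.3):
        self.p_divide = p_divide

    def step(self, niche):
        # Divide only while the niche has room for another cell.
        if len(niche.cells) < niche.capacity and random.random() < self.p_divide:
            niche.cells.append(Cell(self.p_divide))

class Niche:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = [Cell()]

    def step(self):
        for cell in list(self.cells):  # snapshot: newborns act next step
            cell.step(self)

random.seed(0)
niche = Niche(capacity=50)
for _ in range(40):
    niche.step()
assert 2 <= len(niche.cells) <= niche.capacity  # growth bounded by the niche
```

The capacity check lives in the cell's own rule rather than in a global controller, which is the MAS point: the system-level bound is never imposed from above, it emerges from each agent's local interaction with its environment.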