965 results for well-structured transition systems
Abstract:
Madagascar is renowned for the loss of the forested habitat of lemurs and other species endemic to the island. Less well known is that in the highlands, a region often described as an environmental "basket-case" of fire-degraded, eroded grasslands, woody cover has been increasing for decades. Using information derived from publicly available high- and medium-resolution satellites, this study characterizes tree cover dynamics in the highlands of Madagascar over the past two decades. Our results reveal heterogeneous patterns of increased tree cover on smallholder farms and village lands, spurred by a mix of endogenous and exogenous forces. The new trees play important roles in rural livelihoods, providing renewable supplies of firewood, charcoal, timber and other products and services, as well as defensible claims to land tenure in the context of a decline in the use of hillside commons for grazing. This study documents this nascent forest transition through Land Change Science techniques, and provides a prologue to political ecological analysis by setting these changes in their social and environmental context and interrogating the costs and benefits of the shift in rural livelihood strategies.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions mostly do not hold, because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
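As a rough illustration of this kind of automated form querying, a minimal sketch in Python follows; the endpoint URL, the single-text-field assumption, the GET submission and the ".result" selector are all hypothetical stand-ins, and the thesis's own data model and form query language are far more expressive:

```python
# Minimal sketch of automated form querying; the endpoint URL and the
# ".result" selector are hypothetical stand-ins for a real web database.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEARCH_URL = "http://example.org/search"  # hypothetical search interface

def query_web_database(term: str) -> list[str]:
    """Submit a query term through a search form and extract result snippets."""
    # Fetch the page containing the search form and locate its text field
    # (assumes a single text input and a GET form, which many real
    # deep-web interfaces violate).
    page = requests.get(SEARCH_URL, timeout=10)
    form = BeautifulSoup(page.text, "html.parser").find("form")
    field = form.find("input", {"type": "text"})["name"]

    # Submit the query and parse the dynamically generated result page.
    action = urljoin(SEARCH_URL, form.get("action", ""))
    result_page = requests.get(action, params={field: term}, timeout=10)
    soup = BeautifulSoup(result_page.text, "html.parser")
    return [item.get_text(strip=True) for item in soup.select(".result")]

if __name__ == "__main__":
    print(query_web_database("well-structured transition systems"))
```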
Abstract:
The primary objective is to identify the critical factors that naturally affect a performance measurement system. It is important to make correct decisions about measurement systems, which operate in a complex business environment, and the performance measurement system involves highly complex, non-linear factors. The Six Sigma methodology is seen as one potential approach at every organisational level. It is linked to performance and financial measurement, as well as to the analytical thinking on which the management viewpoint depends. The complex systems are connected to the customer relationship study. The primary throughput is a new, well-defined performance measurement structure, facilitated together with an analytical multifactor system. These critical factors should also be seen as a business innovation opportunity. This master's thesis is divided into two theoretical parts; the empirical part combines action-oriented and constructive research approaches with an empirical case study. The secondary objective is to seek a competitive advantage factor through a new analytical tool and Six Sigma thinking. Process and product capabilities are linked to the contribution of the complex system, and the critical barriers are identified by the performance measurement system. The secondary throughput is the product and process cost efficiencies, which are achieved through a management advantage. The performance measurement potential is related to different productivity analyses. Productivity can be seen as an essential part of the competitive advantage factor.
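For reference, the process and product capabilities mentioned above are conventionally quantified in Six Sigma work through the standard capability indices (textbook definitions, not results from this thesis):

```latex
C_p = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma}, \qquad
C_{pk} = \min\!\left(\frac{\mathrm{USL} - \mu}{3\sigma},\ \frac{\mu - \mathrm{LSL}}{3\sigma}\right)
```

where USL and LSL are the upper and lower specification limits, μ the process mean and σ its standard deviation; a "six sigma" process corresponds to C_p = 2.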
Abstract:
Exposing the human bronchial epithelial cell line BEAS-2B to the nitric oxide (NO) donor sodium 1-(N,N-diethylamino)diazen-1-ium-1,2-diolate (DEA/NO) at an initial concentration of 0.6 mM, while generating superoxide ion at a rate of 1 microM/min with the hypoxanthine/xanthine oxidase (HX/XO) system, induced C:G-->T:A transition mutations in codon 248 of the p53 gene. This pattern of mutagenicity was not seen by 'fish-restriction fragment length polymorphism/polymerase chain reaction' (fish-RFLP/PCR) on exposure to DEA/NO alone; exposure to HX/XO alone, however, led to various mutations, suggesting that co-generation of NO and superoxide was responsible for inducing the observed point mutation. DEA/NO potentiated the ability of HX/XO to induce lipid peroxidation as well as DNA single- and double-strand breaks under these conditions, while 0.6 mM DEA/NO in the absence of HX/XO had no significant effect on these parameters. The results show that a point mutation seen at high frequency in certain common human tumors can be induced by simultaneous exposure to reactive oxygen species and a NO source.
Abstract:
Different types of aerosolization and deagglomeration testing systems exist for studying the properties of nanomaterial powders and their aerosols. However, results are dependent on the specific methods used. In order to have well-characterized aerosols, we require a better understanding of how system parameters and testing conditions influence the properties of the aerosols generated. In the present study, four experimental setups delivering different aerosolization energies were used to test the resultant aerosols of two distinct nanomaterials (hydrophobic and hydrophilic TiO2). The reproducibility of results within each system was good. However, the number concentrations and size distributions of the aerosols created varied across the four systems; for number concentrations, e.g., from 10³ to 10⁶ #/cm³. Moreover, distinct differences were also observed between the two materials with different surface coatings. The article discusses how system characteristics and other pertinent conditions modify the test results. We propose using air velocity as a suitable proxy for estimating energy input levels in aerosolization systems. The information derived from this work will be especially useful for establishing standard operating procedures for testing nanopowders, as well as for estimating their release rates under different energy input conditions, which is relevant for occupational exposure.
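To see why air velocity works as an energy proxy, recall the basic fluid-dynamics relation (not a formula from the study itself) for the kinetic energy flux per unit cross-sectional area of an air stream:

```latex
\frac{P}{A} = \tfrac{1}{2}\,\rho\,v^{3}
```

With air density ρ ≈ 1.2 kg/m³, doubling the velocity v raises the delivered energy roughly eightfold, so modest velocity differences between systems translate into large differences in deagglomeration energy.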
Abstract:
What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default 'Hobbesian' rules of the 'game of life', determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter-gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization.
Abstract:
The availability of stem cells holds great promise for studying early developmental stages and for generating adequate cells for cell-transfer therapies. Although many researchers using stem cells have been successful in dissecting intrinsic and extrinsic mechanisms and in generating specific cell phenotypes, few of the stem cells or the differentiated cells show the capacity to repair a tissue. Advances in cell and stem cell cultivation in recent years have brought tremendous progress in the generation of bona fide differentiated cells able to integrate into a tissue after transplantation, opening new perspectives for developmental biology studies and for regenerative medicine. In this review, we focus on the main works attempting to create in vitro conditions that mimic the natural environment of CNS structures such as the neural tube and its development in different brain regions, including the optic cup. Protocols that grow cells in 3D organoids are a key strategy for producing cells resembling endogenous ones. An emphasis on the generation of retina tissue and photoreceptor cells is provided to highlight the promising developments in this field. Other examples are presented and discussed, such as the formation of cortical tissue, the epithelial gut or kidney organoids. The generation of differentiated tissues and well-defined cell phenotypes from embryonic stem (ES) cells or induced pluripotent stem cells (iPSCs) opens several new strategies in the fields of biology and regenerative medicine. In vitro 3D organ/tissue development derived from human cells provides a unique tool to study human cell biology and the pathophysiology of an organ or a specific cell population. The perspective of tissue repair is discussed, as well as the necessity of cell banking to accelerate progress in this promising field.
Abstract:
For several years, the health of adolescents has been on the agenda of ministers, decision makers and health professionals. Around the world, while there has been a steady decrease in death rates among young children, this is not the case for young people. This is mainly because mortality and morbidity during this period of life are largely linked to non-communicable diseases and conditions, including deaths from injuries, suicide, homicide and drug abuse. Unplanned pregnancies, illegal abortions and newly acquired HIV infections are also situations with short- and long-term consequences. This paper reviews the epidemiological data pertaining to adolescent health and disease. It proposes evidence-informed avenues for addressing these issues in the fields of health care (e.g. adolescent-friendly services) and of prevention and health promotion. It also stresses the importance of creating safe environments for the development and well-being of young people, and thus of an interdisciplinary and intersectoral approach to their complex health problems and challenges.
Abstract:
Early warning systems (EWSs) rely on the capacity to forecast a dangerous event with a certain lead time by defining warning criteria on which the safety of the population will depend. Monitoring of landslides is being facilitated by new technologies, decreasing prices and easier data processing. At the same time, predicting the onset of a rapid failure, or the sudden transition from slow to rapid failure and subsequent collapse, and its consequences is challenging for scientists, who must deal with uncertainties and have limited tools to do so. Furthermore, EWSs and warning criteria are increasingly a subject of concern among technical experts, researchers, stakeholders and decision makers responsible for the activation, enforcement and approval of civil protection actions. EWSs also imply a sharing of responsibilities, which is often avoided by technical staff, managers of technical offices and governing institutions. We organized the First International Workshop on Warning Criteria for Active Slides (IWWCAS) to promote sharing and networking among members of specialized institutions and relevant EWS experts. In this paper, we summarize the event to stimulate discussion and collaboration between organizations dealing with the complex task of managing hazard and risk related to active slides.
Abstract:
It is well known that the Neolithic transition spread across Europe at a speed of about 1 km/yr. This result has previously been interpreted as a range expansion of the Neolithic driven mainly by demic diffusion (with cultural diffusion playing a secondary role). However, a long-standing problem is whether this value (1 km/yr) and its interpretation (mainly demic diffusion) are characteristic only of Europe or universal (i.e. intrinsic features of Neolithic transitions all over the world). So far, Neolithic spread rates outside Europe have barely been measured, and Neolithic spread rates substantially faster than 1 km/yr have not previously been reported. Here we show that the transition from hunting and gathering to herding in southern Africa spread at a rate of about 2.4 km/yr, i.e. about twice as fast as the European Neolithic transition. Thus the value of 1 km/yr is not a universal feature of Neolithic transitions in the world. Resorting to a recent demic-cultural wave-of-advance model, we also find that the main mechanism at work in the southern African Neolithic spread was cultural diffusion (with demic diffusion playing a secondary role). This is in sharp contrast to the European Neolithic. Our results further suggest that Neolithic spread rates could be mainly driven by cultural diffusion in cases where the final state of the transition is herding/pastoralism (as in southern Africa) rather than farming and stockbreeding (as in Europe).
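For context, wave-of-advance models of this kind descend from Fisher's reaction-diffusion model, in which a population with initial growth rate a and diffusion coefficient D expands as a front with speed

```latex
v = 2\sqrt{aD}
```

This is the classical demic result; the demic-cultural model the authors resort to generalizes it by adding a cultural transmission term.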
Abstract:
The objective of this Master's thesis was to study the theory of total productive maintenance and apply it to a machine selected as the pilot case at the Abloy Oy Joensuu factory. The pilot case was a new Swiss Hydromat multifunctional machining centre, whose overall equipment effectiveness (OEE) was to be improved through total productive maintenance (TPM). The literature part of the thesis reviews theory related to maintenance in general and to TPM in particular. Maintenance metrics and information systems are also discussed. The empirical part describes how TPM was introduced for the Hydromat machining centre. Most resources were devoted to shortening the machine's setup times. The transition of the machine operators to autonomous maintenance was also part of the work, as was the introduction of a new production monitoring board, which displays several production figures such as overall equipment effectiveness. The most significant results show that setup times are currently very long compared to the targets. Reducing setup times to the desired level requires a long learning process for the machine operators as well as optimal operating conditions. It was also found that introducing and deploying a new operating model in the manufacturing industry is a demanding process.
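The OEE figure tracked on such a monitoring board follows the standard TPM definition (a textbook formula with illustrative numbers, not values from the thesis):

```latex
\mathrm{OEE} = \text{Availability} \times \text{Performance} \times \text{Quality},
\qquad \text{e.g.}\ 0.90 \times 0.95 \times 0.99 \approx 0.846
```

Long setup times depress the availability factor, which is why the thesis concentrates its resources there.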
Abstract:
The physical-chemical process of swelling in water-based gels of natural polymers is investigated with the purpose of applying these systems to biomedical materials for the controlled release of drugs. In this work we study the sol-gel transition of chitosan solutions in the presence of formaldehyde and glutaraldehyde as crosslinking agents, and we determine the effect of several additives on the gelation time of the elaborated systems. The physical-chemical swelling process of the formed gels was evaluated as a function of the degree of crosslinking, the incorporated additives and the pH. Gelling times of chitosan solutions were obtained using viscometric as well as conductivity measurements in the pre-gel state. The results suggest that component concentration modifies the kinetic profile of the transition and the swelling behavior. Regarding H+ content, the gels were highly susceptible to swelling in acidic conditions, which characterizes this system as pH-sensitive.
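The swelling behavior referred to here is conventionally quantified by the gravimetric degree of swelling (a standard definition, not one stated in the abstract):

```latex
Q\,(\%) = \frac{m_{\text{swollen}} - m_{\text{dry}}}{m_{\text{dry}}} \times 100
```

where the masses are those of the gel after equilibration in the swelling medium and after drying, respectively.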
Abstract:
As a result of the growing interest in studying employee well-being as a complex process that shows high levels of within-individual variability and evolves over time, the present study considers the experience of flow in the workplace from a nonlinear dynamical systems approach. Our goal is to offer new ways to move the study of employee well-being beyond linear approaches. With nonlinear dynamical systems theory as the backdrop, we conducted a longitudinal study using the experience sampling method and qualitative semi-structured interviews for data collection; 6981 data records were collected from a sample of 60 employees. The obtained time series were analyzed using various techniques derived from nonlinear dynamical systems theory (i.e., recurrence analysis and surrogate data) and multiple correspondence analyses. The results revealed the following: 1) flow in the workplace presents a high degree of within-individual variability, which is characterized as chaotic in most cases (75%); 2) high levels of flow are associated with chaos; and 3) different dimensions of the flow experience (e.g., merging of action and awareness) as well as individual (e.g., age) and job characteristics (e.g., job tenure) are associated with the emergence of different dynamic patterns (chaotic, linear and random).
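A minimal sketch of the recurrence analysis used on such time series, assuming a one-dimensional series and a simple distance threshold (the study's actual embedding parameters and software are not specified here):

```python
# Minimal recurrence-plot sketch: a point pair (i, j) "recurs" when the
# states lie closer than a threshold eps; the recurrence rate summarizes
# how often the system revisits previous states.
import numpy as np

def recurrence_matrix(x: np.ndarray, eps: float) -> np.ndarray:
    """Binary recurrence matrix R[i, j] = 1 if |x[i] - x[j]| < eps."""
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist < eps).astype(int)

def recurrence_rate(r: np.ndarray) -> float:
    """Fraction of recurrent point pairs (density of the recurrence plot)."""
    return float(r.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.normal(size=200)            # stand-in for a flow time series
    r = recurrence_matrix(series, eps=0.5)
    print(f"recurrence rate: {recurrence_rate(r):.3f}")
```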
Abstract:
The performance of a grid-connected photovoltaic system with a DC-boost stage is investigated. The DC-boost converter topology is derived from a three-phase half-controlled bridge and is controlled by sliding mode control. Because a grid-connected photovoltaic system includes solar cells as a DC source and an inverter for grid connection, these are within the scope of this research as well. The advantages of using MPPT are analyzed. The system is simulated in the Matlab-Simulink™ environment.
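A widely used MPPT scheme whose benefits such studies analyze is perturb-and-observe; a minimal sketch, assuming a toy single-maximum PV curve rather than the authors' Matlab-Simulink model:

```python
# Perturb-and-observe MPPT sketch: nudge the operating voltage and keep
# moving in the direction that increases the extracted power.
def pv_power(v: float) -> float:
    """Toy PV curve with a single maximum (stand-in for a real panel model)."""
    i = max(0.0, 5.0 - 0.02 * v ** 2)        # hypothetical I-V relation
    return v * i

def perturb_and_observe(v: float = 5.0, step: float = 0.1,
                        iterations: int = 500) -> float:
    """Track the maximum power point by hill climbing on measured power."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step                # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                       # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

if __name__ == "__main__":
    v_mpp = perturb_and_observe()
    print(f"operating point: {v_mpp:.2f} V, power: {pv_power(v_mpp):.2f} W")
```

In steady state the operating point oscillates around the maximum power point, which is the characteristic trade-off of perturb-and-observe between tracking speed and ripple.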
Abstract:
The design methods and languages targeted at modern System-on-Chip designs face tremendous pressure from ever-increasing complexity, power, and speed requirements. In estimating any of these three metrics, there is a trade-off between accuracy and the level of abstraction at which the system under design is analyzed. The more detailed the description, the more accurate the simulation will be, but also the more time consuming. Moreover, a designer wants to make decisions as early as possible in the design flow to avoid costly design backtracking. To answer the challenges posed by System-on-Chip designs, this thesis introduces a formal, power-aware framework, its development methods, and methods to constrain and analyze the power consumption of the system under design. The thesis discusses the power analysis of synchronous and asynchronous systems, not forgetting the communication aspects of these systems. The presented framework is built upon the Timed Action System formalism, which offers an environment to analyze and constrain the functional and temporal behavior of a system at a high abstraction level. Furthermore, due to the complexity of System-on-Chip designs, the possibility to abstract away unnecessary implementation details at higher abstraction levels is an essential part of the introduced design framework. The encapsulation and abstraction techniques, together with procedure-based communication, allow a designer to use the presented power-aware framework to model these large-scale systems. The introduced techniques also enable one to subdivide the development of communication and computation into separate tasks, a property that is taken into account in the power analysis part as well. Furthermore, the framework is developed so that it can be used throughout a design project; in other words, a designer is able to model and analyze systems from an abstract specification down to an implementable specification.