Abstract:
The resilience of a social-ecological system is measured by its ability to retain core functionality when subjected to perturbation. Resilience is contextually dependent on the state of system components, the complex interactions among these components, and the timing, location, and magnitude of perturbations. The stability landscape concept provides a useful framework for considering resilience within the specified context of a particular social-ecological system but has proven difficult to operationalize. This difficulty stems largely from the complex, multidimensional nature of the systems of interest and uncertainty in system response. Agent-based models are an effective methodology for understanding how cross-scale processes within and across social and ecological domains contribute to overall system resilience. We present the results of a stylized model of agricultural land use in a small watershed that is typical of the Midwestern United States. The spatially explicit model couples land use, biophysical models, and economic drivers with an agent-based model to explore the effects of perturbations and policy adaptations on system outcomes. By applying the coupled modeling approach within the resilience and stability landscape frameworks, we (1) estimate the sensitivity of the system to context-specific perturbations, (2) determine potential outcomes of those perturbations, (3) identify possible alternative states within state space, (4) evaluate the resilience of system states, and (5) characterize changes in system-scale resilience brought on by changes in individual land use decisions.
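As a rough illustration of the kind of agent-based land-use loop described above, the following minimal Python sketch is offered; the agent attributes, the two land uses, the profit figures, and the perturbation are illustrative assumptions, not the authors' model specification.

```python
import random

LAND_USES = ["row_crop", "perennial"]

class Farmer:
    """Hypothetical land-use agent (illustrative only)."""
    def __init__(self):
        self.land_use = random.choice(LAND_USES)
        self.inertia = random.uniform(0.0, 0.3)   # reluctance to switch land use

    def decide(self, expected_profit):
        best = max(LAND_USES, key=lambda use: expected_profit[use])
        # Switch only if the profit advantage outweighs this farmer's inertia.
        if expected_profit[best] - expected_profit[self.land_use] > self.inertia:
            self.land_use = best

def run(years=20, shock_year=10, n_farmers=100):
    farmers = [Farmer() for _ in range(n_farmers)]
    history = []
    for year in range(years):
        profit = {"row_crop": 1.0, "perennial": 0.8}
        if year >= shock_year:                     # a sustained price perturbation
            profit["row_crop"] = 0.5
        for farmer in farmers:
            farmer.decide(profit)
        share = sum(f.land_use == "row_crop" for f in farmers) / n_farmers
        history.append(round(share, 2))            # system-scale outcome over time
    return history

print(run())
```

Tracking the land-use share before and after the shock is one simple way to visualize whether the system returns to its previous configuration or settles into an alternative state.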
Abstract:
This paper aims to investigate the ways in which context-based sonic art is capable of furthering a knowledge and understanding of place based on the initial perceptual encounter. How might this perceptual encounter operate in terms of a sound work’s affective dimension? To explore these issues I draw upon James J. Gibson’s ecological theory of perception and Gernot Böhme’s concept of an ‘aesthetic of atmospheres’. Within the ecological model of perception an individual can be regarded as a ‘perceptual system’: a mobile organism that seeks information from a coherent environment. I relate this concept to notions of the spatial address of environmental sound work in order to explore (a) how the human perceptual apparatus relates to the sonic environment in its mediated form and (b) how this impacts on individuals’ ability to experience such work as complex sonic ‘environments’. Can the ecological theory of perception aid the understanding of how the listener engages with context-based work? In proposing answers to this question, this paper advances a coherent analytical framework that may lead us to a more systematic grasp of the ways in which individuals engage aesthetically with sonic space and environment. I illustrate this methodology through an examination of some of the recorded work of sound artist Chris Watson.
Abstract:
Estimating the relative orientation and position of a camera is one of the central problems in computer vision. The accuracy of a certain Finnish technology company's traffic sign inventory and localization process can be improved by applying this concept. The company's localization process uses video data produced by a vehicle-mounted camera, and the accuracy of the estimated traffic sign locations depends on the relative orientation between the camera and the vehicle. This thesis proposes a computer vision based software solution that estimates the camera's orientation relative to the direction of movement of the vehicle from the video data. The task was solved using feature-based methods and open source software. On simulated data sets, the camera orientation estimates had an average absolute error of 0.31 degrees. The software solution can be integrated into the traffic sign localization pipeline of the company in question.
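The abstract does not spell out the exact pipeline, but a common feature-based way to recover a camera's orientation relative to the travel direction from two consecutive frames looks roughly like the sketch below (OpenCV, with the camera intrinsic matrix K assumed known; function and variable names are assumptions).

```python
import cv2
import numpy as np

def motion_direction_angles(frame1, frame2, K):
    """frame1, frame2: consecutive 8-bit grayscale frames; K: 3x3 intrinsic matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative pose between the two views from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # For a forward-moving vehicle, t approximates the travel direction expressed in
    # camera coordinates; its deviation from the optical axis (0, 0, 1) gives the
    # camera's yaw and pitch relative to the motion direction.
    t = (t / np.linalg.norm(t)).ravel()
    yaw = np.degrees(np.arctan2(t[0], t[2]))
    pitch = np.degrees(np.arctan2(-t[1], t[2]))
    return yaw, pitch
```

Averaging such estimates over many frame pairs is one way such a method could reach sub-degree accuracy despite noise in individual pairs.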
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, generating massive amounts of data every day. We live in an age of "Big Data", producing data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, better development plans can be drawn up. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this paper is the study and analysis of the Dominant Resource Fairness (DRF) algorithm of YARN and the proposal of a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of YARN's DRF algorithm. Because the cluster serves multiple users and multiple resource types, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's requested resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share of the cluster is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal. However, in cases where different users' dominant resource demands differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes fairness into consideration, but it is no longer the main principle; maximizing resource utilization is the main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
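For reference, a minimal sketch of the standard Dominant Resource Fairness allocation loop that the thesis builds on is given below. The capacities, per-task demands, and user names follow the well-known DRF example from the literature and are not taken from the thesis; the proposed maximum-utilization variant is not reproduced.

```python
def drf_allocate(capacity, demands):
    """capacity: {"cpu": ..., "mem": ...}; demands: {user: per-task demand dict}."""
    used = {r: 0.0 for r in capacity}
    allocation = {u: 0 for u in demands}            # number of tasks per user
    dominant_share = {u: 0.0 for u in demands}

    while True:
        # Pick the user with the smallest dominant share (the "fairness" rule).
        user = min(dominant_share, key=dominant_share.get)
        demand = demands[user]
        if any(used[r] + demand[r] > capacity[r] for r in capacity):
            break                                    # cannot place the next task
        for r in capacity:
            used[r] += demand[r]
        allocation[user] += 1
        # Dominant share = largest fraction of any single resource the user holds.
        dominant_share[user] = max(allocation[user] * demand[r] / capacity[r]
                                   for r in capacity)
    return allocation

# Classic example: 9 CPUs and 18 GB of memory; user A needs <1 CPU, 4 GB> per task,
# user B needs <3 CPU, 1 GB> per task. DRF gives A three tasks and B two tasks.
print(drf_allocate({"cpu": 9, "mem": 18},
                   {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
```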
Abstract:
Finding creative ways of using renewable energy sources for electricity generation, especially in remote areas and particularly in countries that depend on imported energy, while increasing energy security and reducing the cost of such isolated off-grid systems, has become an urgent necessity for the effective strategic planning of energy systems. The aim of this research project was to design and implement a new decision support framework for the optimal design of hybrid micro grids considering different types of technologies, where the design objective is to minimize the total cost of the hybrid micro grid while satisfying the required electric demand. A comprehensive review of existing analytical decision support tools and of the literature on HPS identified the gaps and the necessary conceptual parts of an analytical decision support framework. As a result, this research proposes and reports an Iterative Analytical Design Framework (IADF) and its implementation for the optimal design of an off-grid renewable energy based hybrid smart micro-grid (OGREH-SμG) with intra- and inter-grid (μG2μG & μG2G) synchronization capabilities and a novel storage technique. The modelling, design and simulations were conducted using the HOMER Energy and MATLAB/Simulink energy planning and design software platforms. The design, experimental proof of concept, verification and simulation of a new storage concept incorporating a hydrogen peroxide (H2O2) fuel cell are also reported. The implementation of the smart components, consisting of a Raspberry Pi devised and programmed for the semi-smart energy management framework (a novel control strategy that includes synchronization capabilities) of the OGREH-SμG, is also detailed and reported. The hybrid μG was designed and implemented as a case study for the Bayir/Jordan area. This research provides an alternative decision support tool for renewable energy integration that determines the optimal number, type and size of components to configure the hybrid μG. In addition, this research formulates and reports a linear cost function used to mathematically verify the computer-based simulations and fine-tune the solutions in the iterative framework, and it concludes that such solutions converge to a correct optimal approximation when the properties of the problem are considered. This investigation demonstrates that the implemented and reported OGREH-SμG design, which incorporates wind and solar generation complemented with batteries, two fuel cell units and a diesel generator, and which is able to synchronize with other μ-grids, is a unique approach to utilizing indigenous renewable energy and an effective and optimal way of electrifying developing countries with fewer resources in a sustainable way, with minimum impact on the environment, while also achieving reductions in GHG emissions. The dissertation concludes with suggested extensions to this work.
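As an illustration only, a linear total-cost objective of the kind the framework minimizes might be written as follows; the symbols and cost terms are generic assumptions rather than the thesis's actual formulation.

```latex
% Generic linear total-cost objective for sizing a hybrid micro-grid (illustrative only):
\min_{N_{pv},\,N_{wt},\,N_{bat},\,N_{fc},\,N_{dg}} \;
C_{total} \;=\; \sum_{i \in \{pv,\,wt,\,bat,\,fc,\,dg\}} N_i \,\bigl(C_i^{cap} + C_i^{O\&M} + C_i^{rep}\bigr)
% subject to meeting the load at every time step:
\quad \text{s.t.} \quad \sum_i P_i(t) \;\ge\; P_{demand}(t) \quad \forall t
```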
Abstract:
Purpose – Graffiti, both ancient and contemporary, could be argued to be significant and therefore worthy of protection. Attaching value is, however, subjective, with no single method being used to evaluate these items. The purpose of this paper is to help those attempting to evaluate the merit of graffiti to do so by determining "cultural significance", a widely adopted concept for attaching value to the historic built environment. The current Scottish system used to assess cultural significance is the Scottish Historic Environment Policy (SHEP), which shares many common features with determinants of cultural significance in other countries. The SHEP document, as with other systems, could however be criticised for being insufficiently sensitive to enable the evaluation of historic graffiti, due in part to the subjective nature of the determination of aesthetic value. Design/methodology/approach – A review of the literature is followed by consideration of case studies taken from a variety of historical and geographical contexts. The majority of the examples of graffiti included in this paper have been selected for their relatively high profile, previous academic study, and breadth of geographic spread. This selection should enable a relatively comprehensive, rational assessment to be undertaken. That said, one example has been included to reflect commonly occurring graffiti that is typical of the built environment as a whole. Findings – The determination of aesthetic value is particularly problematic for the evaluator, and the use of additional art-based mechanisms such as "significant form", "self-expression" and "meaning" may aid this process. Regrettably, these determinants are themselves subjective, adding to the complexity of evaluation. Almost all graffiti could be said to have artistic merit using the aforementioned determinants; however, whether it is "good" art is an altogether different question. The evaluation of "good" art and graffiti would traditionally have been carried out by experts. Today, graffiti should be evaluated, and value attached, by broader society, community groups, and experts alike. Originality/value – This research will assist those responsible for historic building conservation in evaluating whether graffiti is worthy of conservation.
Abstract:
Bikeshares promote healthy lifestyles and sustainability among commuters, casual riders, and tourists. However, the central pillar of modern systems, the bike station, cannot be easily integrated into a compact college campus. Fixed stations lack the flexibility to meet the needs of college students who make quick, short-distance trips. Additionally, the cost of implementing and maintaining each station prohibits increasing the number of stations for user convenience. Therefore, the team developed a stationless bikeshare based on a smartlock permanently attached to the bicycles in the system. The smartlock system design incorporates several innovative approaches to provide the usability, security, and reliability that overcome the limitations of a station-centered design. A focus group discussion allowed the team to receive feedback on the early lock, system, and website designs, identify improvements, and craft a pleasant user experience. The team designed a unique two-step lock system that is intuitive to operate while mitigating user error. To ensure security, user access is limited through near field communication (NFC) technology connected to a mechatronic release system. This system relies on an NFC module and a servo driven by an Arduino microcontroller programmed in the Arduino IDE. To track rentals and maintain the system, each bike is fitted with an XBee module that communicates over a scalable ZigBee mesh network. The network allows bidirectional, real-time communication with a Meteor.js web application, which provides user and administrator functions through an intuitive interface available on mobile and desktop. The development of an independent smartlock to replace bike stations is essential to meet the needs of the modern college student. With the goal of creating a bikeshare that better serves college students, Team BIKES has laid the framework for a system that is affordable, easily adaptable, and implementable at any university interested in bringing a bikeshare to its campus.
Abstract:
In this thesis, proactive marketing is suggested to be a broader concept than existing research assumes. Although the concept has been mentioned in the context of competitive advantage in previous research, it has not been comprehensively described. This thesis shows that proactive marketing is more than investing in a company's marketing communications. Proactive marketing is described as a three-phased process that contains different customer value identification, creation, and delivery activities. The purpose of proactive marketing is essentially to anticipate and pursue market opportunities that bring value to the company's stakeholders. Ultimately, proactive marketing aims at acting first on the market, shaping the markets, and thus reaching competitive advantage. The proactive marketing process is supported by the structures of an organization. Suitable structures for proactive marketing are identified in the thesis based on existing research and through an empirical analysis. Moreover, proactive marketing is related to two management theories: the dynamic capabilities framework and the empowerment of employees. A dynamic environment requires companies that pursue proactive marketing to change continuously. Dynamic capabilities are considered tools of management that enable companies to create suitable conditions for this constant change. Empowerment of employees is a management practice that creates proactive behaviors in individuals. The empirical analysis is conducted in an online company operating in the rapidly changing marketplace of the Internet. Through the empirical analysis, the thesis identifies in practice how proactiveness manifests in the marketing process of a company, how organizational structures facilitate proactive marketing, and how proactive marketing is managed. The theoretical contribution of this thesis consists of defining the proactive marketing concept comprehensively and providing suggestions for further research on proactive marketing.
Abstract:
The wide use of antibiotics in aquaculture has led to the emergence of resistant microbial species. This should be avoided or minimized by controlling the amount of drug employed in fish farming. For this purpose, the present work proposes test-strip papers for the detection and semi-quantitative determination of organic drugs by visual comparison of color changes, in an analytical procedure similar to pH monitoring with universal pH paper. This is done by establishing suitable chemical modifications of the cellulose, giving the paper the ability to react with the organic drug and produce a color change. Quantitative data can also be obtained by taking a picture and applying a suitable mathematical treatment to the color coordinates given by the HSL system used by Windows. As a proof of concept, this approach was applied to oxytetracycline (OXY), one of the antibiotics frequently used in aquaculture. A bottom-up modification of the paper was established, starting with the reaction of the glucose moieties on the paper with 3-triethoxysilylpropylamine (APTES). The amine layer so formed allowed binding of a metal ion by coordination chemistry, and the metal ion in turn reacted with the drug to produce a colored compound. The most suitable metals for this modification were selected through bulk studies, and the several stages of the paper modification were optimized to produce an intense color change against the concentration of the drug. The paper strips were applied to the analysis of spiked environmental water, allowing quantitative determination of OXY at concentrations as low as 30 ng/mL. Overall, this work provides a simple method to screen for and discriminate tetracycline drugs in aquaculture, and is a promising tool for local, quick and cheap monitoring of drugs.
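A plausible sketch of the color-coordinate treatment described above follows: average the RGB values of the reacted strip area, convert them to hue/saturation/lightness, and read the concentration off a previously fitted calibration curve. The calibration coefficients below are placeholders, not values from the paper.

```python
import colorsys
import numpy as np

def strip_to_hsl(rgb_patch):
    """rgb_patch: HxWx3 array of 0-255 RGB values cropped from the strip photo."""
    r, g, b = rgb_patch.reshape(-1, 3).mean(axis=0) / 255.0
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note: colorsys orders the result H, L, S
    return h * 360.0, s, l

def concentration_from_hue(hue_deg, slope=-0.85, intercept=210.0):
    # Placeholder linear calibration: concentration (ng/mL) as a function of hue.
    return (hue_deg - intercept) / slope
```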
Abstract:
Historically, the health risk of mycotoxins has been evaluated on the basis of single-chemical and single-exposure-pathway scenarios. However, the co-contamination of foodstuffs with these compounds is being reported at an increasing rate, and a multiple-exposure scenario for humans and for vulnerable population groups such as children is urgently needed. Cereals are among the first solid foods eaten by children and thus constitute an important food group of their diet. Few data are available on early childhood exposure to mycotoxins through the consumption of cereal-based foods. The present study aims to perform a cumulative risk assessment of mycotoxins present in a set of cereal-based foods, including breakfast cereals (BC), processed cereal-based foods (PCBF) and biscuits (BT), consumed by children (1 to 3 years old, n=75) from the Lisbon region, Portugal. Children's food consumption and the occurrence of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) in cereal-based foods were combined to estimate the mycotoxin daily intake, using deterministic and probabilistic approaches. Different strategies were used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin daily exposure. For the remaining mycotoxins, the estimated exposure was compared to the reference dose values (TDI) in order to calculate hazard quotients (HQ, the ratio between exposure and a reference dose). The concentration addition (CA) concept was used for the cumulative risk assessment of multiple mycotoxins, and the combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and the remaining mycotoxins, respectively. The main results revealed a significant health concern related to aflatoxins, and especially aflatoxin M1, with MoET and MoE values below 10,000. HQ and HI values for the remaining mycotoxins were below 1, indicating low concern from a public health point of view. These are the first results on the cumulative risk assessment of multiple mycotoxins present in cereal-based foods consumed by children. Considering the present results, more research is needed to provide governmental regulatory bodies with data to develop an approach that contemplates human exposure, and particularly children's exposure, to multiple mycotoxins in food. This last issue is particularly important considering the potential synergistic effects that could occur between mycotoxins and their potential impact on human health, and mainly on children's health.
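For clarity, the risk metrics used above can be written out. These are the standard definitions; the combined-MoE expression shown is one common way of aggregating individual MoEs under concentration addition and may differ in detail from the study's exact formulation.

```latex
% Standard definitions of the risk metrics (illustrative notation):
MoE = \frac{BMDL}{\text{estimated daily exposure}}, \qquad
HQ_i = \frac{\text{exposure}_i}{TDI_i}, \qquad
HI = \sum_i HQ_i
% One common way to combine individual aflatoxin MoEs under concentration addition:
\frac{1}{MoET} = \sum_j \frac{1}{MoE_j}
```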
Abstract:
People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out for one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, a wide group of fungal secondary metabolites known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available in the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old, n=103) were collected using a 3-day food diary. Contamination data covering 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) were obtained from 20 CBF samples marketed in 2014 and 2015 in Lisbon; the samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. The daily exposure of children to mycotoxins was estimated using deterministic and probabilistic approaches, and different strategies were used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the estimated exposure was compared to the reference dose values (TDI) in order to calculate hazard quotients (the ratio between exposure and a reference dose, HQ). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used, and the combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and the remaining mycotoxins, respectively. Of the analyzed CBF samples, 71% were contaminated with mycotoxins (at values below the legal limits), and approximately 56% of the studied children consumed CBF at least once during the 3 diary days. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. The aflatoxin MoE and MoET revealed a reduced potential risk from exposure through consumption of CBF (with values around 10,000 or more). HQ and HI values for the remaining mycotoxins were below 1. Children are a particularly vulnerable population group with respect to food contaminants, and the present results point to an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible solution to control the growth of moulds and consequently reduce mycotoxin production, contributing to the quality and safety of foods intended for children.
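As an illustration of the probabilistic (Monte Carlo) exposure estimation mentioned above, a minimal sketch follows; the distribution choices and parameters are placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Sample consumption, contamination, and body weight from assumed distributions.
consumption_g_day = rng.lognormal(mean=np.log(20), sigma=0.6, size=N)     # CBF intake
contamination_ng_g = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=N)  # mycotoxin level
body_weight_kg = rng.normal(loc=11.0, scale=1.5, size=N)                  # young children

# Exposure per kg body weight per day, then summary percentiles.
exposure_ng_kg_bw_day = consumption_g_day * contamination_ng_g / body_weight_kg
print("median exposure:", np.percentile(exposure_ng_kg_bw_day, 50))
print("P95 exposure:   ", np.percentile(exposure_ng_kg_bw_day, 95))
```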
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially in proportion to the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition whereby precomputed subpaths are composed to compute the whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
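For context, the baseline uniformization computation that path-based bounding methods of this kind extend can be sketched as follows; the small generator matrix is an illustrative example, and the dissertation's structured state-space representation and path-composition algorithms are not reproduced.

```python
import numpy as np

def transient_distribution(Q, p0, t, eps=1e-10):
    """Transient state probabilities of a CTMC at time t via standard uniformization."""
    Q = np.asarray(Q, dtype=float)
    Lambda = max(-Q.diagonal())              # uniformization rate
    P = np.eye(len(Q)) + Q / Lambda          # DTMC subordinated to a Poisson process
    term = np.asarray(p0, dtype=float)       # Poisson term k = 0
    poisson = np.exp(-Lambda * t)
    result = poisson * term
    k, weight_sum = 0, poisson
    while weight_sum < 1.0 - eps:            # truncate once Poisson weights cover 1 - eps
        k += 1
        term = term @ P
        poisson *= Lambda * t / k
        result += poisson * term
        weight_sum += poisson
    return result

# Two-state repairable component: failure rate 0.1/h, repair rate 1.0/h.
Q = [[-0.1, 0.1], [1.0, -1.0]]
print(transient_distribution(Q, [1.0, 0.0], t=5.0))
```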
Abstract:
The barriers that people with disabilities face around the world are not only inherent to the limitations resulting from the disability itself but, more importantly, rest with societal technologies of exclusion. Using a mixed methodology approach, I conduct a quest to reveal several societal factors that limit the full participation of people with disabilities in their communities, which will contribute to understanding and developing a more comprehensive framework for the full inclusion of people with disabilities into society. First, I conduct a multiple regression analysis to examine whether there is a statistical relationship between the national level of development, the level of democratization, and the level of education within a country's population on the one hand, and expressed concern for and preparedness to improve the quality of life of people with disabilities on the other hand. The results from the quantitative methodology reveal that people without disabilities are more prepared to take care of people with disabilities when the level of development of the country is higher, when the people have more freedom of expression and hold the government accountable for its actions, and when the level of corruption is under control. However, a greater concern for the well-being of people with disabilities is correlated with a high level of country development, a decreased value of political stability and absence of violence, a decreased level of government effectiveness, and a greater level of law enforcement. None of the dependent variables are significantly correlated with the level of education of a given country. Then, I delve into an interpretive analysis to understand multiple factors that contribute to the construction of attitudes and practices towards people with disabilities. In doing this, I build upon the four main principles outlined by the United Nations as strongly recommended to be embedded in all international programmes: (1) identification of claims of human rights and the corresponding obligations of governments, hence I assess and analyze disability rights in education, looking at United Nations, United States, and European Union perspectives on educational rights provisions for people with disabilities (Ch. 3); (2) estimated capacity of individuals to claim their rights and of governments to fulfill their obligations, hence I look at people with disabilities as rights-holders and duty-bearers and discuss the importance of investing in special capital in the context of global development (Ch. 4); (3) programmes monitor and evaluate the outcomes and the processes under the auspices of human rights standards, hence I look at the importance of evaluating the UN World Programme of Action Concerning People with Disabilities from multiple perspectives, as an example of why and how to monitor and evaluate educational human rights outcomes and processes (Ch. 5); and (4) programming should reflect the recommendations of international human rights bodies and mechanisms, hence I focus on programming that fosters the development of the capacity of people with disabilities, that is, planning for an ecology of disabilities and ecoducation for people with disabilities (Ch. 6). Results from both methodologies converge to a certain point, and they further complement each other.
One common result for the two methodologies employed is that disability is an evolving concept when viewed in a broader context, which integrates the four spaces that the ecological framework incorporates. Another common result is that factors such as economic, social, legal, political, and natural resources and contexts contribute to the health, education and employment opportunities, and to the overall well-being of people with disabilities. The ecological framework sees all these factors from a meta-systemic perspective, where bi-directional interactions are expected and desired, and also from a human rights point of view, where the inherent value of people is upheld at its highest standard.
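As a sketch of the kind of multiple regression used in the quantitative strand of this work, the following example fits an ordinary least squares model on simulated country-level data; the variable names and data are placeholders, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120

# Simulated country-level predictors (placeholders for development, democratization,
# corruption control, and education indicators).
development = rng.normal(size=n)
voice_accountability = rng.normal(size=n)
corruption_control = rng.normal(size=n)
education = rng.normal(size=n)

# Simulated outcome: an expressed-concern score.
concern = 0.5 * development + 0.3 * voice_accountability + rng.normal(scale=0.8, size=n)

X = sm.add_constant(np.column_stack([development, voice_accountability,
                                     corruption_control, education]))
model = sm.OLS(concern, X).fit()
print(model.summary())
```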
Abstract:
Database schemas are, in many organizations, considered one of the critical assets to be protected. From database schemas it is possible to infer not only the information being collected but also the way organizations manage their businesses and/or activities. One of the ways database schemas can be disclosed is through Create, Read, Update and Delete (CRUD) expressions. In practice, their use can follow strict security rules or be used without regulation by malicious users. In the first case, users are required to master the database schemas; this can be critical when the applications that access the database directly, which we call database interface applications (DIAs), are developed by third-party organizations via outsourcing. In the second case, users can disclose database schemas, partially or totally, by following malicious algorithms based on CRUD expressions. To overcome this vulnerability, we propose a new technique in which CRUD expressions can no longer be directly manipulated by DIAs. Whenever a DIA starts up, the associated database server generates a random codified token for each CRUD expression and sends it to the DIA; the DIA can then use the token to have the database server execute the corresponding CRUD expression. In order to validate our proposal, we present a conceptual architectural model and a proof of concept.
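A minimal sketch of the proposed token indirection, under assumed names and interfaces, might look like the following; the actual architecture and message formats of the proposal are not reproduced here.

```python
import secrets

class CrudTokenRegistry:
    """Server-side mapping issued at DIA start-up: opaque token -> CRUD expression."""

    def __init__(self, crud_expressions):
        # Tokens are regenerated for each DIA session, so clients never hold SQL text.
        self.by_token = {secrets.token_urlsafe(16): sql for sql in crud_expressions}

    def tokens_for_client(self):
        # Only opaque tokens leave the server; the schema stays hidden from the DIA.
        return list(self.by_token)

    def execute(self, token, params, cursor):
        sql = self.by_token.get(token)
        if sql is None:
            raise PermissionError("unknown or expired CRUD token")
        cursor.execute(sql, params)
        return cursor.fetchall() if sql.lstrip().upper().startswith("SELECT") else None
```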
Abstract:
Background: There are limited data concerning endoscopist-directed deep sedation for endoscopic retrograde cholangiopancreatography. The aim of this study was to establish the safety of the practice and the risk factors for difficult sedation in daily practice. Patients and methods: Hospital-based, frequency-matched case-control study. All patients were identified from a database of 1,008 patients between 2014 and 2015. The cases were those with difficult sedations, a concept defined as a combination of the receipt of high doses of midazolam or propofol, poor tolerance, use of reversal agents, or sedation-related adverse events. Different factors were evaluated to determine whether they predicted difficult sedation. Results: One hundred and eighty-nine patients (63 cases, 126 controls) were included. Cases were classified in terms of high-dose requirements (n = 35, 55.56%), sedation-related adverse events (n = 14, 22.22%), the use of reversal agents (n = 13, 20.63%) and agitation/discomfort (n = 8, 12.7%). Concerning adverse events, the total rate was 1.39%, including clinically relevant hypoxemia (n = 11), severe hypotension (n = 2) and paradoxical reactions to midazolam (n = 1). The rate of hypoxemia was higher in patients receiving propofol combined with midazolam than in patients receiving propofol alone (2.56% vs. 0.8%, p < 0.001). Alcohol consumption (OR: 2.674 [95% CI: 1.098-6.515], p = 0.030), opioid consumption (OR: 2.713 [95% CI: 1.096-6.716], p = 0.031) and the consumption of other psychoactive drugs (OR: 2.015 [95% CI: 1.017-3.991], p = 0.045) were confirmed to be independent risk factors for difficult sedation. Conclusions: Endoscopist-directed deep sedation during endoscopic retrograde cholangiopancreatography is safe. The presence of certain factors should be assessed before the procedure to identify patients at high risk for difficult sedation.