857 results for complexity of agents
Abstract:
This paper presents empirical research comparing the accounting difficulties that arise from the use of two valuation methods for biological assets, fair value (FV) and historical cost (HC) accounting, in the agricultural sector. It also compares how reliable each valuation method is in the decision-making process of agents within the sector. By conducting an experiment with students, farmers, and accountants operating in the agricultural sector, we find that they have more difficulties, make larger miscalculations, and make poorer judgements with HC accounting than with FV accounting. In-depth interviews uncover flawed accounting practices adopted in the Spanish agricultural sector in order to meet HC accounting requirements. Given the complexities of cost calculation for biological assets and the predominance of small family business units in advanced Western countries, the study concludes that accounting can be more easily applied in the agricultural sector under FV than under HC accounting, and that HC conveys a less accurate grasp of the real situation of a farm.
Abstract:
Antioxidant enzymes are involved in important processes of cell detoxification during oxidative stress and have, therefore, been used as biomarkers in algae. Nevertheless, their limited use in fluvial biofilms may be due to the complexity of such communities. Here, a comparison between different extraction methods was performed to obtain a reliable method for catalase extraction from fluvial biofilms. Homogenization followed by glass bead disruption appeared to be the best compromise for catalase extraction. This method was then applied to a field study in a metal-polluted stream (Riou Mort, France). The most polluted sites were characterized by a catalase activity 4–6 times lower than in the low-polluted site. The results of the method comparison and its application are promising for the use of catalase activity as an early-warning biomarker of toxicity using biofilms in the laboratory and in the field.
Abstract:
BACKGROUND: In the context of population aging, multimorbidity has emerged as a growing concern in public health. However, little is known about multimorbidity patterns and other issues surrounding chronic diseases. The aim of our study was to examine multimorbidity patterns, the relationship between physical and mental conditions, and the distribution of multimorbidity in the Spanish adult population. METHODS: Data for this cross-sectional study were collected from the COURAGE study. A total of 4,583 participants from Spain were included, 3,625 of them aged over 50. An exploratory factor analysis was conducted to detect multimorbidity patterns in the population over 50 years of age. Crude and adjusted binary logistic regressions were performed to identify individual associations between physical and mental conditions. RESULTS: Three multimorbidity patterns emerged: 'cardio-respiratory' (angina, asthma, chronic lung disease), 'mental-arthritis' (arthritis, depression, anxiety) and the 'aggregated pattern' (angina, hypertension, stroke, diabetes, cataracts, edentulism, arthritis). After adjusting for covariates, asthma, chronic lung disease, arthritis and the number of physical conditions were associated with depression. Angina and the number of physical conditions were associated with a higher risk of anxiety. With regard to multimorbidity distribution, women over 65 years suffered from the highest rate of multimorbidity (67.3%). CONCLUSION: Multimorbidity affects a high percentage of the Spanish population, especially the elderly. There are specific multimorbidity patterns and individual associations between physical and mental conditions, which bring new insights into the complexity of patients with chronic conditions. There is a need to implement patient-centered care that addresses these interactions rather than merely paying attention to individual diseases.
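As a rough illustration of the kind of analysis described above, the sketch below combines an exploratory factor analysis over binary chronic-condition indicators with an adjusted logistic regression. The file name, column names, covariates and number of factors are illustrative assumptions, not the COURAGE study's actual variables or code.

```python
# Minimal sketch (not the COURAGE analysis itself): exploratory factor analysis over
# binary chronic-condition indicators, followed by an adjusted logistic regression.
# Column names, covariates and the input file are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

conditions = ["angina", "asthma", "chronic_lung", "arthritis", "depression",
              "anxiety", "hypertension", "stroke", "diabetes", "cataracts", "edentulism"]
covariates = ["age", "sex", "education"]          # illustrative adjustment variables

df = pd.read_csv("courage_spain.csv")             # hypothetical 0/1-coded survey extract

# Exploratory factor analysis: loadings suggest which conditions cluster into patterns
fa = FactorAnalysis(n_components=3, rotation="varimax")   # rotation needs scikit-learn >= 0.24
fa.fit(df[conditions])
loadings = pd.DataFrame(fa.components_.T, index=conditions,
                        columns=["pattern_1", "pattern_2", "pattern_3"])
print(loadings.round(2))

# Adjusted logistic regression: association of selected physical conditions with depression
X = sm.add_constant(df[["asthma", "chronic_lung", "arthritis"] + covariates])
model = sm.Logit(df["depression"], X).fit()
print(model.summary())
```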
Abstract:
In the field of observational methodology the observer is obviously a central figure, and close attention should be paid to the process through which he or she acquires, applies, and maintains the skills required. Basic training in how to apply the operational definitions of categories and the rules for coding, coupled with the opportunity to use the observation instrument in real-life situations, can have a positive effect in terms of the degree of agreement achieved when one evaluates intra- and inter-observer reliability. Several authors, including Arias, Argudo, & Alonso (2009) and Medina and Delgado (1999), have put forward proposals for the process of basic and applied training in this context. Reid and De Master (1982) focus on the observer's performance and how to maintain the acquired skills, arguing that periodic checks are needed after initial training because an observer may, over time, become less reliable due to the inherent complexity of category systems. The purpose of this subsequent training is to maintain acceptable levels of observer reliability. Various strategies can be used to this end, including providing feedback about those categories associated with a good reliability index, or offering re-training in how to apply those that yield lower indices. The aim of this study is to develop a performance-based index that is capable of assessing an observer's ability to produce reliable observations in conjunction with other observers.
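To illustrate the kind of inter-observer reliability check discussed above, the sketch below computes Cohen's kappa between two observers' category codes. It is not the performance-based index developed in the study, only a minimal example of quantifying chance-corrected agreement; the codings are invented.

```python
# Minimal sketch of an inter-observer agreement check (Cohen's kappa). It only
# illustrates the idea of quantifying how reliably two observers apply the same
# category system; it is not the study's own performance-based index.
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two observers' category codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative codings of the same ten events by two observers
obs1 = ["A", "A", "B", "C", "B", "A", "C", "C", "B", "A"]
obs2 = ["A", "B", "B", "C", "B", "A", "C", "A", "B", "A"]
print(f"kappa = {cohen_kappa(obs1, obs2):.2f}")
```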
Abstract:
We have developed an easy method for the synthesis of thirteen compounds derived from 1,2,4-triazoles through a carboxylic acid and hydrazinophthalazine reaction, in 75-85% yield, mediated by coupling agents such as 1-ethyl-3-(3'-dimethylaminopropyl)-carbodiimide hydrochloride and 1-hydroxybenzotriazole. The operational simplicity of this method and the good yield of products make it valuable for the synthesis of new compounds with pharmacological activity.
Abstract:
The design methods and languages targeted at modern System-on-Chip designs are facing tremendous pressure from ever-increasing complexity, power, and speed requirements. To estimate any of these three metrics, there is a trade-off between accuracy and the level of abstraction at which a system under design is analyzed. The more detailed the description, the more accurate the simulation will be, but, on the other hand, the more time consuming it will be. Moreover, a designer wants to make decisions as early as possible in the design flow to avoid costly design backtracking. To answer the challenges posed upon System-on-Chip designs, this thesis introduces a formal, power-aware framework, its development methods, and methods to constrain and analyze the power consumption of the system under design. The thesis discusses power analysis of synchronous and asynchronous systems, including the communication aspects of these systems. The presented framework is built upon the Timed Action System formalism, which offers an environment to analyze and constrain the functional and temporal behavior of the system at a high abstraction level. Furthermore, due to the complexity of System-on-Chip designs, the possibility to abstract away unnecessary implementation details at higher abstraction levels is an essential part of the introduced design framework. The encapsulation and abstraction techniques, incorporated with the procedure-based communication, allow a designer to use the presented power-aware framework in modeling these large-scale systems. The introduced techniques also enable one to subdivide the development of communication and computation into separate tasks. This property is taken into account in the power analysis part as well. Furthermore, the presented framework is developed in such a way that it can be used throughout the design project. In other words, a designer is able to model and analyze systems from an abstract specification down to an implementable specification.
Abstract:
In many industries, such as petroleum production, and the petrochemical, metal, food and cosmetics industries, wastewaters containing an emulsion of oil in water are often produced. The emulsions consist of water (up to 90%), oils (mineral, animal, vegetable and synthetic), surfactants and other contaminants. In view of its toxic nature and its deleterious effects on the surrounding environment (soil, water), such wastewater needs to be treated before release into natural waterways. Membrane-based processes have successfully been applied in industrial applications and are considered as possible candidates for the treatment of oily wastewaters. Easy operation, lower cost, and in some cases, the ability to reduce contaminants below existing pollution limits are the main advantages of these systems. The main drawback of membranes is flux decline due to fouling and concentration polarisation. The complexity of oil-containing systems demands complementary studies on issues related to the mitigation of fouling and concentration polarisation in membrane-based ultrafiltration. In this thesis the effect of different operating conditions (factors) on the ultrafiltration of oily water is studied. Important factors are normally correlated and, therefore, their effect should be studied simultaneously. This work uses a novel approach to study different operating conditions, like pressure, flow velocity, and temperature, and solution properties, like oil concentration (cutting oil, diesel, kerosene), pH, and salt concentration (CaCl2 and NaCl), in the ultrafiltration of oily water, simultaneously and in a systematic way using an experimental design approach. A hypothesis is developed to describe the interaction between the oil drops, salt and the membrane surface. The optimum conditions for ultrafiltration and the contribution of each factor in the ultrafiltration of oily water are evaluated. It is found that the effect on permeate flux of the various factors studied strongly depends on the type of oil, the type of membrane and the amount of salts. The thesis demonstrates that a system containing oil is very complex, and that fouling and flux decline can be observed even at very low pressures. This means that only the weak form of the critical flux exists for such systems. The cleaning of the fouled membranes and the influence of different parameters (flow velocity, temperature, time, pressure, and chemical concentration (SDS, NaOH)) were evaluated in this study. It was observed that fouling, and consequently cleaning, behaved differently for the studied membranes. Of the membranes studied, the membrane with the lowest propensity for fouling and the most easily cleaned was the regenerated cellulose membrane (C100H). In order to get more information about the interaction between the membrane and the components of the emulsion, a streaming potential study was performed on the membrane. The experiments were carried out at different pH values and oil concentrations. It was seen that oily water changed the surface charge of the membrane significantly. The surface charge and the streaming potential during different stages of filtration were measured and analysed, which constitutes a new method for studying oil fouling introduced in this thesis. The surface charge varied in different stages of filtration. It was found that the surface charge of a cleaned membrane was not the same as initially; however, the permeability was equal to that of a virgin membrane.
The effect of filtration mode was studied by performing the filtration in both cross-flow and dead-end mode. The effect of salt on performance was considered in both studies. It was found that salt decreased the permeate flux even at low concentration. To test the effect of a change in hydrophilicity, the commercial membranes used in this thesis were modified by grafting PNIPAAm onto their surfaces. A new technique (corona treatment) was used for this modification. The effect of modification on permeate flux and retention was evaluated. The modified membranes changed their pore size around 33 °C, resulting in different retention and permeability. The results obtained in this thesis can be applied to optimise the operation of a membrane plant under normal or shock conditions, or to modify the process such that it becomes more efficient or effective.
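As a rough illustration of the experimental design approach mentioned above, the sketch below enumerates a two-level full factorial design over a few of the factors listed in the abstract. The factor levels are illustrative assumptions, not the settings actually used in the thesis.

```python
# Minimal sketch of a two-level full factorial design of the kind used to study
# ultrafiltration factors simultaneously; the factor levels below are illustrative
# assumptions, not the thesis' actual experimental settings.
from itertools import product

factors = {
    "pressure_bar":     (0.5, 2.0),
    "flow_velocity_ms": (0.5, 1.5),
    "temperature_C":    (25, 40),
    "oil_conc_pct":     (0.1, 1.0),
    "NaCl_mM":          (0, 10),
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs in the full 2^{len(factors)} design")
for i, run in enumerate(runs[:4], start=1):   # show the first few runs
    print(i, run)

# The measured response of each run (e.g. permeate flux) would then be fitted with a
# linear model including interaction terms to rank main effects and interactions.
```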
Abstract:
Seed-assisted synthesis of zeolites reduces crystallization time and enables the industrial use of certain zeolites that was previously unfeasible due to the complexity of synthesis and the cost of organic structure-directing agents. This study reports the primary results of zeolite crystallization in the presence of seeds, which are used as a substitute for organic compounds.
Abstract:
The size and complexity of software development projects are growing very fast. At the same time, the proportion of successful projects is still quite low according to previous research. Although almost every project team knows the main areas of responsibility that would help finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating success chances that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed; his or her answers were discussed and refined. This made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of the two models of success prediction showed that The Standish Group model overestimates problems in a project. At the same time, McConnell's model can help to identify problems in time and avoid trouble in the future. A framework for evaluating success chances in distributed projects was suggested. The framework is similar to The Standish Group model, but it was customized for distributed projects.
Abstract:
This work aimed to describe the foliar anatomy of seven species of Eucalyptus, emphasizing the characterization of secretory structures and the chemical nature of the compounds secreted and/or present in the leaves. Anatomical characterization and histochemical evaluation to determine the nature and localization of the secondary compounds were carried out in fully expanded leaves, according to standard methodology. Anatomical differences were verified among the species studied, especially in E. pyrocarpa. Sub-epidermal cavities were the only secretory structures found in the seven species studied, with higher density in E. pellita and lower in E. pilularis. The following compounds were histochemically detected: lipophilic compounds, specifically lipids of the essential-oil or resin-oil type and sesquiterpene lactones, found in the lumen of the cavities of the seven species; and hydrophilic compounds, of the phenolic type, found in the mesophyll of all the species studied and in the epidermis of some of them. The results confirmed the complexity of the product secreted by the cavities, stressing the histochemically homogeneous nature of these compounds among the species. However, the phenolic compound results may indicate important variations in adaptations and ecological relations, since they show differences among the species.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide the synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first presented routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
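As a rough sketch of the congestion-aware output selection idea described above, the code below routes a packet in a 2D mesh by picking, among the minimal-path output directions, the neighbouring router reporting the lowest congestion. It is an illustrative simplification under assumed data structures, not the thesis' actual routing algorithm or router architecture.

```python
# Minimal sketch of congestion-aware output selection in a 2D mesh NoC: among the
# admissible (minimal-path) output ports, forward to the neighbour reporting the
# lowest congestion (here, normalised buffer occupancy). Illustrative only; not the
# thesis' specific routing algorithm or router microarchitecture.

def minimal_directions(cur, dst):
    """Output directions that bring the packet closer to its destination."""
    (cx, cy), (dx, dy) = cur, dst
    dirs = []
    if dx > cx: dirs.append("E")
    if dx < cx: dirs.append("W")
    if dy > cy: dirs.append("N")
    if dy < cy: dirs.append("S")
    return dirs or ["LOCAL"]          # already at the destination router

def select_output(cur, dst, congestion):
    """Pick the admissible direction whose downstream router is least congested."""
    candidates = minimal_directions(cur, dst)
    return min(candidates, key=lambda d: congestion.get(d, 0.0))

# Illustrative congestion levels (e.g. input-buffer occupancy) reported by neighbours
congestion = {"E": 0.8, "N": 0.2, "W": 0.1, "S": 0.5}
print(select_output(cur=(1, 1), dst=(3, 3), congestion=congestion))   # -> 'N'
```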
Abstract:
Software systems are expanding and becoming increasingly present in everyday activities. The constantly evolving society demands that they deliver more functionality, are easy to use and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or even fails to perform. The concept of being able to depend on the software is particularly significant when it comes to critical systems. At this point the quality of a system is regarded as an essential issue, since any deficiencies may lead to considerable financial loss or life endangerment. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development starting at early design stages, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional developments. Therefore, it is important to make them more accessible and reduce the gap between the formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased due to a compact representation of the development and related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This means that it is necessary to establish certain techniques for the evaluation of rigorous developments. Since we are studying various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting. Our goal is to provide methods for collecting data and record evidence of the applicability of rigorous approaches. This would support organisations in making decisions about the integration of formal methods into their development processes. It is important to control the software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as related artefacts, e.g. models. These have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may impact its maintainability, and thus quality. Our goal is to leverage the quality of a system via metrics and measurements, as well as generic refinement patterns, which are applied to a model and a specification. We argue that they can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide the metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, which are based on structural, syntactical and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies. The results of the investigation are juxtaposed with the perception of domain experts.
It is our aspiration to promote measurements as an indispensable part of the quality control process and a strategy towards quality improvement.
Abstract:
The wars the Western armies are involved in today are different from those fought at the end of the 20th century. To explain this change, Western military thinkers have come up with various definitions of warfare over the last 30 years, each describing the tendencies involved in the conflicts of the time. The changing nature of conflicts brought a new term to the surface: hybrid warfare. The term was meant to describe and explain the multi-modality and complexity of modern-day conflict. This thesis seeks the answer to the question: what is the development of thought behind hybrid warfare? In this thesis the Vietnam War (1965-1975) is used as an example of compound warfare, focusing on the American involvement in the war. The Second Lebanon War (2006) serves as an example of hybrid warfare. Both case studies include an irregular opposing force, namely the National Liberation Front in the Vietnam War and Hezbollah in the Second Lebanon War. These two case studies are compared with the term full spectrum operations introduced in the current U.S. Department of the Army Field Manual No. 3-0, Operations, to see the differences and similarities of each term. The perspective of this thesis is the American point of view. This thesis concludes that hybrid warfare, compound warfare and full spectrum operations are very similar. The first two terms are included in the last one. Although hybrid warfare is not officially defined, it will most likely continue to be used in the discussion in the future, since hybrid wars and hybrid threats are officially accepted terms.
Abstract:
The objective of this study was to simulate the impact of elevated temperature scenarios on leaf development of potato in Santa Maria, RS, Brazil. Leaf appearance was estimated using a multiplicative model with a non-linear temperature response function, which calculates the daily leaf appearance rate (LAR, leaves day-1) and the accumulated number of leaves (LN) from crop emergence to the appearance of the last leaf. Leaf appearance was estimated over 100 years in the following scenarios: current climate, +1 °C, +2 °C, +3 °C, +4 °C and +5 °C. The LAR model was run with coefficients of the Asterix cultivar for five emergence dates in two growing seasons (Fall and Spring). The variable of interest was the duration (days) of the phase from crop emergence to the appearance of the final leaf number (EM-FLN). Statistical analysis was performed assuming a three-factorial experiment, with the main effects being climate scenarios, growing seasons, and emergence dates, in a completely randomized design using the years (one hundred) as replications. The results showed that warmer scenarios lead to an increase, in the fall, and a decrease, in the spring growing season, in the duration of the leaf appearance phase, indicating the high vulnerability and complexity of the response of the potato crop grown in a subtropical environment to climate change.
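As an illustration of the multiplicative leaf appearance model described above, the sketch below accumulates a daily leaf appearance rate, LAR = LARmax * f(T), until a final leaf number is reached, and compares a baseline temperature series with a warmed one. The beta-type (Wang-Engel) temperature response and all parameter values are assumptions for illustration, not the calibrated coefficients of the Asterix cultivar.

```python
# Minimal sketch of a multiplicative leaf appearance model: daily LAR = LARmax * f(T),
# with f(T) a non-linear (beta-type) temperature response, summed daily until the final
# leaf number is reached. The Wang-Engel response and all parameter values below are
# assumptions for illustration, not the calibrated Asterix coefficients.
import math

def f_temp(t, tmin=7.0, topt=21.0, tmax=30.0):
    """Wang-Engel beta temperature response, 0..1 (cardinal temperatures assumed)."""
    if t <= tmin or t >= tmax:
        return 0.0
    alpha = math.log(2.0) / math.log((tmax - tmin) / (topt - tmin))
    num = 2 * (t - tmin) ** alpha * (topt - tmin) ** alpha - (t - tmin) ** (2 * alpha)
    return num / (topt - tmin) ** (2 * alpha)

def days_to_final_leaf(daily_mean_temps, lar_max=0.6, final_leaf_number=20):
    """Days from emergence until the accumulated leaf number reaches FLN."""
    leaf_number = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        leaf_number += lar_max * f_temp(t)        # LAR in leaves per day
        if leaf_number >= final_leaf_number:
            return day
    return None                                   # FLN not reached within the series

# Illustrative comparison: a baseline temperature series vs a +3 degree C scenario
base = [18 + 4 * math.sin(2 * math.pi * d / 60) for d in range(120)]
print("current :", days_to_final_leaf(base))
print("+3 deg C:", days_to_final_leaf([t + 3 for t in base]))
```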
Abstract:
The draft forces of soil-engaging tines, and their theoretical analysis by means of existing mathematical models, had not yet been studied in Rio Grande do Sul soils. Among the existing models, those whose calculated draft forces fit most closely the values measured in the field were identified for two Rio Grande do Sul soils: an Albaqualf and a Paleudult. Of the models studied, those suggested by Reece (the so-called "Universal Earthmoving Equation"), by Hettiaratchi and Reece, and by Godwin and Spoor were the best fitting ones when the calculated results were compared with those measured in situ. Given the lower complexity of Reece's model, it is suggested that this model be used for predicting draft forces of narrow tines in the Albaqualf and the Paleudult.
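For reference, the general form of Reece's Universal Earthmoving Equation mentioned above, as it is commonly stated in the soil-tool mechanics literature, is sketched below; the notation and the exact set of terms should be checked against the original sources, since formulations vary slightly between authors.

```latex
% Common form of Reece's (1965) Universal Earthmoving Equation (notation may vary):
%   F   draft force on the tool          \gamma  soil unit weight
%   d   working depth                    w       tool width
%   c   soil cohesion                    c_a     soil-metal adhesion
%   q   surcharge pressure               N_*     dimensionless factors that depend on
%                                                rake angle and soil friction angles
\[
  F = \left( \gamma\, d^{2} N_{\gamma} + c\, d\, N_{c} + c_{a}\, d\, N_{ca} + q\, d\, N_{q} \right) w
\]
```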