855 results for objective-based coordination
Abstract:
This work was carried out with the objective of evaluating the growth and development of honey weed (Leonurus sibiricus) based on days or thermal units (growing degree days). Two independent trials were conducted to quantify phenological development and total dry mass accumulation under increasing or decreasing photoperiod conditions. Within a single growing season, honey weed phenological development fitted well to a time scale in days or growing degree days, but there was no equivalence between seasons: plants developed faster under increasing photoperiods, flowering 100 days after seeding. Neither the day scale nor thermal units were able to estimate overall honey weed phenology across the different seasons of the year. Under all growing conditions, honey weed plants accumulated a total dry mass of over 50 g per plant. Dry mass accumulation was adequately fitted to growing degree days, most notably with a base temperature of 10 ºC. Therefore, a greater environmental influence on the species' phenology and a lesser environmental influence on growth (dry mass) were observed, indicating that other variables, such as photoperiod, could potentially complement the mathematical models.
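As an illustration of the thermal-unit scale used in the abstracts above, here is a minimal sketch of growing-degree-day accumulation from daily temperature records, assuming the common averaging method; the example temperatures are hypothetical, and the base temperature of 10 ºC follows the abstract.

```python
def growing_degree_days(t_min, t_max, t_base=10.0):
    """Accumulate growing degree days (GDD) from daily min/max temperatures.

    Uses the common averaging method: GDD_day = max(0, (Tmin + Tmax)/2 - Tb).
    """
    gdd = 0.0
    for tmin, tmax in zip(t_min, t_max):
        daily_mean = (tmin + tmax) / 2.0
        gdd += max(0.0, daily_mean - t_base)
    return gdd

# Hypothetical week of daily temperatures (ºC)
t_min = [12, 14, 13, 11, 15, 16, 14]
t_max = [24, 26, 25, 22, 28, 29, 27]
print(growing_degree_days(t_min, t_max, t_base=10.0))  # accumulated GDD
```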
Abstract:
This work was carried out with the objective of evaluating the growth and development of sourgrass (Digitaria insularis) based on days or thermal units (growing degree days - GDD). Two independent trials were developed to quantify the species' phenological development and total dry matter accumulation under increasing or decreasing photoperiod conditions. Plants were grown in 4 L plastic pots filled with commercial substrate and adequately fertilized. In each trial, nine growth evaluations were carried out, with three replicates. The phenological development of sourgrass fitted well to a time scale in days or GDD through a first-degree linear equation. Sourgrass showed slow initial growth, followed by exponential dry matter accumulation, under the increasing photoperiod condition. Maximum total dry matter was 75 and 6 g per plant for the increasing and decreasing photoperiod conditions, respectively. Thus, the phenological development of sourgrass may be predicted by mathematical models based on days or GDD; however, it should be noted that other environmental variables, especially photoperiod, interfere with the species' growth (mass accumulation).
Abstract:
This work was carried out with the objective of elaborating mathematical models to predict the growth and development of purple nutsedge (Cyperus rotundus) based on days or accumulated thermal units (growing degree days). Two independent trials were developed, the first under a decreasing photoperiod (March to July) and the second under an increasing photoperiod (August to November). In each trial, ten assessments of plant growth and development were performed, quantifying total dry matter and the species' phenology. Phenology was then fitted to first-degree equations, considering the individual trials or their grouping. Similarly, total dry matter was fitted to logistic-type models. In all regressions, four temporal scale possibilities were assessed for the x axis: accumulated days, or growing degree days (GDD) with base temperatures (Tb) of 10, 12 and 15 ºC. For both photoperiod conditions, the growth and development of purple nutsedge were adequately fitted to predictive mathematical models based on accumulated thermal units, most notably with Tb = 12 ºC. Considering GDD calculated with Tb = 12 ºC, purple nutsedge phenology may be predicted by y = 0.113x, while the species' growth may be predicted by y = 37.678/(1 + (x/509.353)^(-7.047)).
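A minimal sketch of how the fitted models above could be applied, assuming daily mean temperatures are already available; the helper function names are illustrative, while the coefficients come directly from the abstract.

```python
def gdd_accumulated(daily_means, t_base=12.0):
    """Accumulate GDD from daily mean temperatures with base temperature Tb."""
    return sum(max(0.0, t - t_base) for t in daily_means)

def nutsedge_phenology(gdd):
    """Phenological stage predicted by the fitted linear model y = 0.113x."""
    return 0.113 * gdd

def nutsedge_dry_matter(gdd):
    """Total dry matter (g/plant) from the fitted logistic model."""
    return 37.678 / (1.0 + (gdd / 509.353) ** (-7.047))

daily_means = [20.5] * 60  # hypothetical 60 days at 20.5 ºC mean temperature
x = gdd_accumulated(daily_means)  # 60 days * 8.5 ºC above Tb = 510 GDD
print(nutsedge_phenology(x), nutsedge_dry_matter(x))
```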
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the present work and earlier research indicate a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements of the industry and the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It becomes clear that no such system exists on the marketplace today, and that there is room to improve the overall process efficiency of the target market through such a planning tool. This thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
Demand for energy systems offering high efficiency as well as the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high and low temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on the use of non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operating conditions. The results for the turbine designs indicated that working fluid selection should not be based only on thermodynamic analysis but also requires consideration of the turbine design. The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures and in both micro and larger scale applications.
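A minimal sketch of the kind of thermodynamic cycle analysis described above, assuming the open-source CoolProp property library and R245fa as an example working fluid; the pressures and component efficiencies are illustrative assumptions, not values from the thesis.

```python
# pip install CoolProp  -- accurate fluid property library (assumed available)
from CoolProp.CoolProp import PropsSI

fluid = 'R245fa'                 # example organic working fluid, not from the thesis
p_evap, p_cond = 10e5, 2e5       # evaporator / condenser pressures [Pa] (illustrative)
eta_turb, eta_pump = 0.75, 0.70  # assumed isentropic efficiencies

# State 1: saturated liquid leaving the condenser
h1 = PropsSI('H', 'P', p_cond, 'Q', 0, fluid)
s1 = PropsSI('S', 'P', p_cond, 'Q', 0, fluid)

# State 2: after the pump (isentropic enthalpy rise corrected by pump efficiency)
h2s = PropsSI('H', 'P', p_evap, 'S', s1, fluid)
h2 = h1 + (h2s - h1) / eta_pump

# State 3: saturated vapour leaving the evaporator
h3 = PropsSI('H', 'P', p_evap, 'Q', 1, fluid)
s3 = PropsSI('S', 'P', p_evap, 'Q', 1, fluid)

# State 4: after the turbine (isentropic drop scaled by turbine efficiency)
h4s = PropsSI('H', 'P', p_cond, 'S', s3, fluid)
h4 = h3 - eta_turb * (h3 - h4s)

w_net = (h3 - h4) - (h2 - h1)    # specific net work [J/kg]
q_in = h3 - h2                   # specific heat input [J/kg]
print(f"thermal efficiency: {w_net / q_in:.3f}")
```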
Abstract:
The objective of the present study was to determine the frequency at which people complain of any type of headache, and its relationship with sociodemographic characteristics and psychiatric comorbidity in São Paulo, Brazil. A three-step cluster sampling method was used to select 1,464 subjects aged 18 years or older. They were mainly from families of middle and upper socioeconomic levels living in the catchment area of the Instituto de Psiquiatria; however, this area also contains some slums and shantytowns. The subjects were interviewed with the Brazilian version of the Composite International Diagnostic Interview version 1.1 (CIDI 1.1) by a trained lay interviewer. Answers to the CIDI 1.1 questions allowed us to classify people according to their psychiatric condition and their headaches based on their own ideas about the nature of their illness. The lifetime prevalence of "a lot of problems with" headache was 37.4% (76.2% of which were attributed to use of medicines, drugs/alcohol, physical illness or trauma, and 23.8% attributed to nervousness, tension or mental illness). The odds ratio (OR) for headache among participants with "nervousness, tension or mental illness" was elevated for depressive episodes (OR, 2.1; 95%CI, 1.4-3.4), dysthymia (OR, 3.4; 95%CI, 1.6-7.4) and generalized anxiety disorder (OR, 4.3; 95%CI, 2.1-8.6), when compared with subjects without headache. For "a lot of problems with" headaches attributed to medicines, drugs/alcohol, physical illness or trauma, the risk was also increased for dysthymia but not for generalized anxiety disorder. These data show a strong association between headache and chronic psychiatric disorders in this Brazilian population sample.
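To make the odds-ratio figures above concrete, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2×2 exposure-outcome table; the cell counts are hypothetical, not the study's data.

```python
import math

# Hypothetical 2x2 table (not the study's data):
#                  headache   no headache
# disorder             a           b
# no disorder          c           d
a, b, c, d = 40, 60, 100, 320

or_ = (a * d) / (b * c)                       # odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf method
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```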
Abstract:
The objective of the thesis is to create a value-based pricing model for marine engines and to study the feasibility of implementing such a model in the sales organization of a specific segment of the case company's marine division. Different pricing strategies, the concept of "value", and how perceptions of value can be influenced through value-based marketing are presented as the theoretical background for the value-based pricing model. Forbis and Mehta's Economic Value to Customer (EVC) was selected as the framework for creating the value-based pricing model for marine engines. The EVC model is based on calculating and comparing the life-cycle costs of the reference product and competing products, thus showing the quantifiable value of the company's own product compared to the competition. In the applied part of the thesis, the components of the EVC model are identified for a marine diesel engine, the components are explained, and an example calculation created in Excel is presented. When examining the possibilities of implementing a value-based pricing strategy based on the EVC model in practice, it was found that the lack of precise information on competing products is the single biggest obstacle to using EVC exactly as presented in the literature. It was also found that the necessary communication channels are sometimes missing, and that some clients and product end-users simply lack the interest to spend time studying the life-cycle costs of the product. Information on the company's own products is, however, sufficient, and the sales force is capable of communicating with sufficiently high executive levels in the client organizations. It is therefore suggested to focus on quantifying and communicating the company's own value proposition. The dynamic nature of the business environment (variance in the applications in which engines are installed, different clients, competition, end-clients, etc.) also means that each project requires its own EVC calculation. This is demanding in terms of the resources needed, so it is suggested to concentrate on selected projects and buyers, and on clients where the necessary communication channels to the right levels in the customer organization are available. Finally, it should be highlighted that, as the literature suggests, implementing a value-based pricing strategy is not possible unless the whole business approach is value based.
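A minimal sketch of the life-cycle cost comparison underlying the EVC framework, with entirely hypothetical numbers; in Forbis and Mehta's formulation, EVC is the price at which the customer's total life-cycle cost with the company's product equals that of the reference product.

```python
# All figures are hypothetical, for illustration only.
ref_price = 1_000_000           # reference (competitor) engine price
ref_ownership_cost = 2_400_000  # competitor fuel + maintenance over the life cycle
own_ownership_cost = 2_050_000  # our engine's life-cycle ownership cost

# EVC: the price at which the customer's total life-cycle cost with our
# product equals the total life-cycle cost with the reference product.
evc = ref_price + (ref_ownership_cost - own_ownership_cost)
print(f"EVC = {evc:,}")  # any price below EVC leaves quantifiable value for the customer
```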
Abstract:
Virtual environments and real-time simulators (VERS) are becoming increasingly important tools in the research and development (R&D) process of non-road mobile machinery (NRMM). Virtual prototyping techniques enable faster and more cost-efficient development of machines compared to the use of real-life prototypes. High energy efficiency has become an important topic in the world of NRMM because of environmental and economic demands. The objective of this thesis is to develop VERS-based methods for the research and development of NRMM. A process using VERS for assessing the effects of human operators on the life-cycle efficiency of NRMM was developed. Human-in-the-loop simulations were run with an underground mining loader to study the developed process. The simulations were run in the virtual environment of the Laboratory of Intelligent Machines of Lappeenranta University of Technology. A physically adequate real-time simulation model of NRMM was shown to be reliable and cost-effective for testing hardware components by means of hardware-in-the-loop (HIL) simulations. A control interface connecting an integrated electro-hydraulic energy converter (IEHEC) with a virtual simulation model of a log crane was developed. The IEHEC consists of a hydraulic pump-motor and an integrated electrical permanent magnet synchronous motor-generator. The results show that state-of-the-art real-time NRMM simulators are capable of resolving factors related to the energy consumption and productivity of NRMM. A significant variation between the test drivers was found. The results show that VERS can be used for assessing human effects on the life-cycle efficiency of NRMM. HIL simulation responses, compared to those achieved with the conventional simulation method, demonstrate the advantages and drawbacks of the various possible interfaces between the simulator and the hardware part of the system under study. Novel ideas for arranging the interface were successfully tested and compared with the more traditional one. The proposed process for assessing the effects of operators on life-cycle efficiency will be applied to a wider group of operators in the future. The driving styles of the operators can then be analysed statistically from a sufficiently large result data set. The statistical analysis can identify the most life-cycle-efficient driving style for a specific environment and machinery. The proposed control interface for HIL simulation needs to be studied further; the robustness and adaptation of the interface in different situations must be verified. Future work will also include studying the suitability of the IEHEC for different working machines using the proposed HIL simulation method.
Abstract:
The objective of the present study was to describe, for the first time in Brazil, the use by a non-ophthalmologist of a community-based marginal rotation procedure by a posterior approach in the indigenous population of the Upper Rio Negro basin. Seventy-three upper eyelids of 46 Indians (11 males and 35 females) with cicatricial upper eyelid entropion and trichiasis were operated on in the Indian communities using a marginal rotation procedure by a posterior approach, performed by a non-ophthalmologist physician who had general surgery experience but only an extremely short period (one week) of ophthalmic training. Subjects were reevaluated 6 months after surgery. Results were classified according to the presence and location of residual trichiasis, and symptoms were assessed according to a three-level subjective scale (better, worse or no change). Fifty-six eyelids (76.7%) were free from trichiasis, whereas residual trichiasis was observed in 17 eyelids (23.3%) of 10 subjects. In these cases, trichiasis was either lateral or medial to the central portion of the lid. Of these 10 patients, only 4 reported that the surgery did not improve the irritative symptoms. We conclude that marginal rotation by a posterior approach is an effective and simple procedure with few complications, even when performed by non-specialists. Due to its simplicity, the posterior approach is an excellent option for community-based upper eyelid entropion surgery.
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during its training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized here to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy on the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures, together with their optimal parameters, have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in their classification tasks.
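A minimal sketch of the core idea, a nearest-prototype classifier whose prototype vectors are optimized with differential evolution; this uses scipy's general-purpose differential_evolution and a fixed Euclidean distance rather than the thesis's own implementation, and omits the distance-pool and aggregation machinery.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def predict(prototypes, X):
    """Assign each sample to the class of its nearest prototype (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

def objective(flat):
    """Misclassification rate for a flattened set of prototype vectors."""
    prototypes = flat.reshape(n_classes, n_feat)
    return np.mean(predict(prototypes, X) != y)

# One prototype vector per class; each component bounded by the data range
bounds = [(X.min(), X.max())] * (n_classes * n_feat)
result = differential_evolution(objective, bounds, seed=0, maxiter=200)
print("training accuracy:", 1.0 - result.fun)
```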
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables the parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue at ground level as well, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models into, e.g., a hardware description language, namely VHDL.
Abstract:
The main objective of this Master's thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses produced in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct an allocation model and to modify the created model so that it is suited to other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of the existing theory of activity-based costing (ABC) and an inspection of cost information, followed by interviews with company officials to get a general view of the requirements for the model to be constructed. Familiarization with the company began with the existing cost accounting methods. The main proposals for a new allocation model emerged from the interviews and were used to set targets for developing the new allocation method. As a result of this thesis, an Excel-based model was created from the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness, transparency in cost allocations and the products' cost structure. The improved cost awareness is achieved by selecting the best possible cost drivers for this situation. Capacity changes are also taken into consideration: the use of practical or normal capacity instead of theoretical capacity is suggested. Some recommendations for further development are also made regarding capacity handling and cost collection.
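A minimal sketch of driver-based overhead allocation of the kind such an ABC model performs, with hypothetical cost pools, drivers and products; the actual model is Excel-based and far more detailed.

```python
# Hypothetical cost pools and annual driver volumes (not the company's data)
cost_pools = {"machine setup": 120_000, "quality control": 80_000}
driver_volumes = {"machine setup": 400, "quality control": 1_000}  # setups, inspections

# Driver consumption per product over the same period
products = {
    "product A": {"machine setup": 250, "quality control": 300},
    "product B": {"machine setup": 150, "quality control": 700},
}

# Rate per driver unit: pool cost / driver volume (practical capacity)
rates = {pool: cost / driver_volumes[pool] for pool, cost in cost_pools.items()}

for name, usage in products.items():
    allocated = sum(rates[pool] * qty for pool, qty in usage.items())
    print(f"{name}: {allocated:,.0f} of overhead allocated")
```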
Abstract:
In the present study, we modeled a reaching task as a two-link mechanism. The upper arm and forearm motion trajectories during vertical arm movements were estimated from the angular accelerations measured with dual-axis accelerometers. A data set of reaching synergies from able-bodied individuals was used to train a radial basis function artificial neural network with upper arm/forearm tangential angular accelerations. The trained radial basis function artificial neural network for the specific movements predicted forearm motion from new upper arm trajectories with high correlation (mean, 0.9149-0.941). For all other movements, prediction was low (range, 0.0316-0.8302). The results suggest that the proposed algorithm generalizes successfully over similar motions and subjects. Such networks may be used as a high-level controller that predicts forearm kinematics from voluntary movements of the upper arm. This methodology is suitable for restoring the upper limb functions of individuals with motor disabilities of the forearm, but not of the upper arm. The developed control paradigm is applicable to upper-limb orthotic systems employing functional electrical stimulation, and the proposed approach is of particular significance for people with spinal cord injuries in a free-living environment. A further implication of the measurement system with dual-axis accelerometers developed for this study lies in the evaluation of movement during the course of rehabilitation: training-related changes in the synergies apparent from movement kinematics would characterize the extent and the course of recovery. As such, a simple system using this methodology is of particular importance for stroke patients. The results underline the important issue of upper-limb coordination.
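A minimal sketch of a radial basis function regressor of the kind described above, mapping input acceleration features to an output trajectory; the Gaussian-kernel RBF with least-squares output weights is a common formulation, and the data here are synthetic stand-ins, not the study's recordings.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
# Synthetic stand-in data: 2-D acceleration features -> 1-D target trajectory
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

centers = X[rng.choice(len(X), 20, replace=False)]  # centers picked from the data
Phi = rbf_design(X, centers, sigma=0.4)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # least-squares output weights

pred = rbf_design(X, centers, 0.4) @ w
print("correlation:", np.corrcoef(pred, y)[0, 1])
```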
Abstract:
The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details of the hardware and software implementation. We imaged phantoms and the heart and kidneys of rats. When pinhole collimators are used, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and the pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolutions better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology and Oncology.
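A minimal sketch of the standard maximum-likelihood (ML-EM) reconstruction update that tools like the one described are based on, for a toy system matrix; the geometry and counts are synthetic, whereas a real pinhole SPECT reconstruction encodes the projection physics in the system matrix A.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM update: x <- x / (A^T 1) * A^T (y / (A x)), elementwise, x >= 0."""
    x = np.ones(A.shape[1])           # uniform non-negative initial estimate
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.uniform(size=(120, 64))       # toy system matrix: 120 projection bins, 8x8 image
x_true = np.zeros(64)
x_true[27], x_true[36] = 1.0, 0.5     # two hot voxels
y = rng.poisson(100 * (A @ x_true))   # Poisson-distributed projection counts
x_hat = mlem(A, y)
print("brightest voxel recovered:", x_hat.argmax() == 27)
```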
Abstract:
Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, in order to enable the future design of energy-efficient wireless sensors for context recognition in wearable computing applications. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios: Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices, each representative of its form factor. The power consumption was measured using PowerTutor, an Android energy profiler application with a logging option; since it relies on unknown parameters, it was calibrated against a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters or processes may also affect power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. Finally, the paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
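A minimal sketch of the kind of scenario-based energy model described, summing per-component average power over the time each component is active in a scenario; the component power figures are hypothetical placeholders, not the paper's measurements.

```python
# Hypothetical average power draw per component (mW); not the paper's measurements
component_power_mw = {"display": 400, "cpu": 250, "wifi": 300, "speaker": 120}

# Each scenario lists how long (s) each component is active
scenarios = {
    "Transfer by Wi-Fi": {"cpu": 60, "wifi": 60},
    "Speaker":           {"cpu": 60, "speaker": 60},
    "Display":           {"cpu": 60, "display": 60},
}

def scenario_energy_mj(active_times):
    """Energy in millijoules: sum of P_component * t_active."""
    return sum(component_power_mw[c] * t for c, t in active_times.items())

for name, times in scenarios.items():
    print(f"{name}: {scenario_energy_mj(times) / 1000:.1f} J")
```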