976 results for time constraints


Relevance: 60.00%

Abstract:

In future power systems, under the smart grid and microgrid operation paradigms, consumers can be seen as an energy resource with decentralized and autonomous decisions in energy management. It is expected that each consumer will manage not only loads, but also small generation units, heating systems, storage systems, and electric vehicles. Each consumer can participate in different demand response events promoted by system operators or aggregation entities. This paper proposes an innovative method to manage the appliances in a house during a demand response event. The main contribution of this work is to include time constraints in resource management, together with context evaluation, in order to ensure the required comfort levels. The dynamic resource management methodology enables better management of resources during a demand response event, especially events of long duration, by changing the priorities of loads during the event. A case study with two scenarios is presented, considering one demand response event with a 30 min duration and another with a 240 min (4 h) duration. In both simulations, the demand response event requests a reduction in power consumption during the event. A total of 18 loads are used, including real and virtual ones, controlled by the presented house management system.
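The abstract describes re-prioritizing loads as a demand response event progresses. The following minimal Python sketch illustrates that idea only; the load names, priority values, comfort rule and reduction target are hypothetical and are not the paper's actual algorithm.

# Illustrative sketch: reorder controllable loads by a priority that is
# re-evaluated as a demand response (DR) event progresses, then curtail the
# lowest-priority loads until the requested reduction is met. Load names,
# priorities and the comfort rule below are hypothetical.

def curtail_loads(loads, reduction_target_w, elapsed_min):
    """Pick loads to shed so the total shed power >= reduction_target_w."""
    # Loads that have already been curtailed for a long time regain priority
    # (a crude comfort consideration).
    def dynamic_priority(load):
        penalty = load["curtailed_min"] / (elapsed_min + 1)
        return load["base_priority"] + penalty

    shed, total = [], 0.0
    for load in sorted(loads, key=dynamic_priority):
        if total >= reduction_target_w:
            break
        shed.append(load["name"])
        total += load["power_w"]
    return shed, total

loads = [
    {"name": "water_heater", "power_w": 1500, "base_priority": 0.2, "curtailed_min": 0},
    {"name": "hvac",         "power_w": 2000, "base_priority": 0.8, "curtailed_min": 30},
    {"name": "ev_charger",   "power_w": 3300, "base_priority": 0.1, "curtailed_min": 0},
]
print(curtail_loads(loads, reduction_target_w=3000, elapsed_min=60))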

Relevance: 60.00%

Abstract:

Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches mainly focused on dynamic power dissipation, while neglecting the static (leakage) energy consumption. However, technology improvements have resulted in static power dissipation increasingly dominating. Addressing this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting the real-time constraints. Our algorithms are based on the race-to-halt strategy, which runs the system at top speed with the aim of creating long idle intervals that are used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations that show up to an 8% reduction in energy consumption over existing work at high utilization. The complexity of our algorithms is lower than that of state-of-the-art algorithms. We also eliminate assumptions made in the related work that restrict the practical application of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is also presented using a large set of simulation results, where our algorithms reduce the experienced number of pre-emptions in all cases. Our results show that sleep states in general can reduce the overall number of pre-emptions by up to 30% when compared to the sleep-agnostic earliest-deadline-first algorithm.
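As a rough illustration of the race-to-halt idea described in this abstract, the sketch below picks the deepest sleep state whose break-even time fits inside an idle interval. The state names, power figures and break-even times are hypothetical, not the paper's model.

# Illustrative race-to-halt style decision: when an idle interval opens,
# choose the lowest-power sleep state whose break-even time fits inside it.
# The state table below is a made-up example.

SLEEP_STATES = [
    # (name, power_in_state_mW, break_even_time_ms)
    ("shallow", 40.0, 1.0),
    ("deep",     5.0, 20.0),
    ("off",      0.5, 200.0),
]

def choose_sleep_state(idle_interval_ms):
    """Return the lowest-power state whose break-even time fits the idle interval."""
    best = None  # stay active if no state pays off
    for name, power, break_even in SLEEP_STATES:
        if idle_interval_ms >= break_even:
            if best is None or power < best[1]:
                best = (name, power)
    return best[0] if best else "active"

for idle in (0.5, 5.0, 50.0, 500.0):
    print(idle, "ms ->", choose_sleep_state(idle))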

Relevance: 60.00%

Abstract:

The smart grid concept is a key issue in future power systems, namely at the distribution level, with deep implications for the operation and planning of these systems. Several advantages and benefits for both the technical and economic operation of the power system and of the electricity markets are recognized. The increasing integration of demand response and distributed generation resources, most of them with small-scale distributed characteristics, leads to the need for aggregating entities such as Virtual Power Players. Operation business models become more complex in the context of smart grid operation. Computational intelligence methods can be used to provide a suitable solution for the resource scheduling problem, considering the time constraints involved. This paper proposes a methodology for the joint dispatch of demand response and distributed generation to provide energy and reserve by a virtual power player that operates a distribution network. The optimal schedule minimizes the operation costs and is obtained using a particle swarm optimization approach, which is compared with a deterministic approach used as the reference methodology. The proposed method is applied to a 33-bus distribution network with 32 medium voltage consumers and 66 distributed generation units.
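The following sketch shows a generic particle swarm optimization loop minimizing a toy dispatch cost, to illustrate the optimization technique named in the abstract. The quadratic cost coefficients, bounds, penalty and PSO parameters are hypothetical and unrelated to the paper's 33-bus case study.

# Minimal particle swarm optimization (PSO) sketch for a generic dispatch cost.
import random

def pso_minimize(cost, dim, lo, hi, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    gbest = pbest[min(range(particles), key=lambda i: pbest_cost[i])][:]
    gbest_cost = min(pbest_cost)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical quadratic generation-cost example with a demand-balance penalty.
def dispatch_cost(p, demand=500.0):
    fuel = 0.01 * p[0] ** 2 + 2 * p[0] + 0.02 * p[1] ** 2 + 1.5 * p[1]
    return fuel + 1000 * abs(demand - sum(p))

print(pso_minimize(dispatch_cost, dim=2, lo=0.0, hi=400.0))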

Relevance: 60.00%

Abstract:

Inbreeding avoidance is predicted to induce sex biases in dispersal. But which sex should disperse? In polygynous species, females pay higher costs for inbreeding and thus might be expected to disperse more, but empirical evidence consistently reveals male biases. Here, we show that theoretical expectations change drastically if females are allowed to avoid inbreeding via kin recognition. At high inbreeding loads, females should prefer immigrants over residents, thereby boosting male dispersal. At lower inbreeding loads, by contrast, inclusive fitness benefits should induce females to prefer relatives, thereby promoting male philopatry. This result points to disruptive effects of sexual selection. The inbreeding load that females are ready to accept is surprisingly high. In the absence of search costs, females should prefer related partners as long as δ < r/(1+r), where r is relatedness and δ is the fecundity loss relative to an outbred mating. This amounts to fitness losses of up to one-fifth for a half-sib mating and one-third for a full-sib mating, which lie in the upper range of inbreeding depression values currently reported in natural populations. The observation of active inbreeding avoidance in a polygynous species thus suggests that inbreeding depression exceeds this threshold in the species under scrutiny, or that inbred matings at least partly forfeit other mating opportunities for males. Our model also shows that female choosiness should decline rapidly with search costs stemming from, for example, reproductive delays. Species under strong time constraints on reproduction should thus be tolerant of inbreeding.
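As a quick check of the thresholds quoted in the abstract, substituting the standard relatedness coefficients into the inequality gives:

% half siblings: r = 1/4; full siblings: r = 1/2
\[
  \delta < \frac{r}{1+r}, \qquad
  r = \tfrac{1}{4} \ \Rightarrow\ \delta < \tfrac{1}{5}, \qquad
  r = \tfrac{1}{2} \ \Rightarrow\ \delta < \tfrac{1}{3}.
\]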

Relevance: 60.00%

Abstract:

A Czerny mount double monochromator is used to measure Raman scattered radiation near 90° from a crystalline silicon sample. Incident light is provided by a mixed gas Kr-Ar laser operating at 5145 Å. The double monochromator is calibrated to true wavelength by comparing Kr and Ar emission line positions (Å) to the grating position display (Å) [1]. The relationship was found to be linear and can be described by y = 1.219873x − 1209.32 (1), where y is the true wavelength (Å) and x is the grating position display (Å). The Raman emission spectra are collected via C++ encoded software, which displays a mV signal from a photodetector and allows stepping control of the gratings via an A/D interface [2]. The software collection parameters, detector temperature and optics are optimised to yield the best quality spectra. The inclusion of a cryostat allows for temperature-dependent capability ranging from 4 K to ≈ 350 K. Silicon Stokes temperature-dependent Raman spectra generally show agreement with literature results [3] in their frequency hardening, FWHM reduction and intensity increase as temperature is reduced. Tests reveal that a re-alignment of the double monochromator is necessary before spectral resolution can approach the literature standard. This has not yet been carried out due to time constraints.
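The calibration in equation (1) can be applied directly; the short sketch below converts grating position display values to true wavelength using the coefficients quoted above. The example display values are made up.

# Applies the linear calibration y = 1.219873 x - 1209.32 quoted in the
# abstract: grating position display (Å) -> true wavelength (Å).

def true_wavelength(grating_position_display):
    """Calibrated wavelength (Å) from the grating position display (Å)."""
    return 1.219873 * grating_position_display - 1209.32

for x in (5200.0, 5400.0, 5600.0):   # hypothetical display readings
    print(f"display {x:.1f} Å -> true {true_wavelength(x):.1f} Å")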

Relevance: 60.00%

Abstract:

There were three purposes to this study. The first purpose was to determine how learning can be influenced by various factors in the rock climbing experience. The second purpose was to examine what people can learn from the rock climbing experience. The third purpose was to investigate whether that learning can transfer from the rock climbing experience to the subjects' real life in the workplace. Ninety employees from a financial corporation in the Niagara Region volunteered for this study. All subjects were surveyed throughout a one-day treatment. Ten were purposefully selected one month later for interviews. Ten themes emerged from the subjects in terms of what was learned: inspiration, motivation, and determination; preparation; goals and limitations; perceptions and expectations; confidence and risk taking; trust and support; teamwork; feedback and encouragement; learning from failure; and finally, skills and flow. All participants were able to transfer what was learned back to the workplace. The results of this study suggested that subjects' learning was influenced by their ability to: take risks in a safe environment, fail without penalty, support each other, plan without time constraints, and enjoy the company of fellow workers that they wouldn't normally associate with. Future directions for research should include different types of treatments such as white water rafting, sky diving, tall ship sailing, or caving.

Relevance: 60.00%

Abstract:

The purpose of this research is to describe the journey towards Comprehensive School Health at two Aboriginal elementary schools. An advocate and a healthy schools committee were identified at both schools and were responsible for developing initiatives to create a healthy school community. A case study was used to gather an in-depth understanding of Comprehensive School Health for the two schools involved. As a researcher, I functioned within the role of a participant-observer, as I was actively involved in the programs and initiatives completed in both schools. The research process included the pilot study, ethics clearance and the distribution of letters of invitation and consent forms. Data collection included 16 semi-structured, guided interviews with principals, teachers, and students. Participant observations included sites such as the gymnasium, classrooms, playgrounds, school environments and bulletin boards, as well as artifact analysis of documents such as school newsletters, physical education schedules and school handbooks. The interviews were transcribed and coded using an inductive approach, which involves finding patterns, themes and categories in the data (Patton, 2002). Research questions guided the findings as physical activity, physical education, nutrition and transportation were discussed. Themes developed through coding were teacher-student interactions, cultural traditions, time constraints and professional development, and were discussed using a Comprehensive School Health framework.

Relevance: 60.00%

Abstract:

Background. West Nile Virus (WNV), a mosquito-borne flavivirus, is one of an increasing number of infectious diseases that have been emerging or re-emerging in the last two decades. Since the arrival of WNV in Canada to the present date, the Niagara Region has reported only 30 clinical cases, a small number compared to the hundreds reported in other regions with similar conditions. Moreover, the last reported human case in Niagara was in 2006. As it has been demonstrated that the majority of WNV infections are asymptomatic, the question remains whether the lack of clinical cases in Niagara truly reflects a lack of transmission to humans or whether infections are still occurring but are mostly asymptomatic. Objectives. The general objective of this study was to establish whether or not active WNV transmission could be detected in a human population residing in Niagara during the 2007 transmission season. To fulfil this objective, a cross-sectional seroprevalence study was designed to investigate the presence of anti-WNV antibodies in a sample of Mexican migrant agricultural workers employed on farms registered with the Seasonal Agricultural Workers Program (SAWP). Due to the Mexican origin of the study participants, three specific research objectives were proposed: a) determine the seroprevalence of anti-WNV antibodies as well as anti-Dengue virus antibodies (a closely related virus prevalent in Mexico and likely to confound WNV serology); b) analyze risk factors associated with WNV and Dengue virus seropositivity; and c) assess the awareness of study participants about WNV infection as well as their understanding of the mode of transmission and clinical importance of the infection. Methodology. After obtaining ethics clearance from Brock University, farms were visited and workers invited to participate. Due to time constraints, only a small number of farms were enrolled, resulting in a convenience, non-randomized study sample. Workers' demographic and epidemiological data were collected using a standardized questionnaire, and blood samples were drawn to determine serum anti-WNV and anti-Dengue antibodies with a commercial ELISA. All positive samples were sent to the National Microbiology Laboratory in Winnipeg, Manitoba for confirmation with the Plaque Reduction Neutralization Test (PRNT). Data were analyzed with Stata 10.0. Antibody determinations were reported as seroprevalence proportions for both WNV and Dengue. Logistic regression was used to analyze risk factors that may be associated with seropositivity, and awareness was reported as the proportion of individuals possessing awareness over the total number of participants. Results and Discussion. In total, 92 participants working on 5 farms completed the study. Using the commercial ELISA, seropositivity was as follows: 2.2% for WNV IgM, 20.7% for WNV IgG, and 17.1% for Dengue IgG. Possible cross-reactivity was demonstrated in 15/20 (75.0%) samples that were positive for both WNV IgG and Dengue IgG. Confirmatory testing with the PRNT demonstrated that none of the WNV ELISA-positive samples had antibodies to WNV, but 13 samples tested positive for anti-Dengue antibodies (14.1% Dengue seroprevalence). The findings showed that the ELISA performance was very poor for assessing anti-WNV antibodies in individuals previously exposed to Dengue virus. However, the ELISA had better sensitivity and specificity for assessing anti-Dengue antibodies.

Whereas statistical analysis could not be done for WNV seropositivity, as all samples were PRNT negative, logistic regression demonstrated several risk factors for Dengue exposure. The first year coming to Canada appeared to be significantly associated with increased exposure to Dengue, while lower socio-economic housing and the presence of a water basin in the yard in Mexico appeared to be significantly associated with decreased exposure to Dengue. These seemingly contradictory results illustrate that in mobile populations such as migrant workers, risk factors for exposure to Dengue are not easily identified, and more research is needed. Assessing the awareness of WNV and its clinical importance showed that only 23% of participants had some knowledge of WNV, of whom 76% knew that the infection was mosquito-borne and 47% recognized fever as a symptom. The identified lack of understanding and awareness was not surprising, since WNV is not a visible disease in Mexico. Since WNV persists in an enzootic cycle in Niagara and the occurrence of future outbreaks is unpredictable, the agricultural workers remain at risk of transmission. Therefore it is important that they receive sufficient health education regarding WNV before leaving Mexico and during their stay in Canada. Conclusions. Human transmission of WNV could not be proven among the study participants, even though their occupation places them at high risk of mosquito bites. The limitations of the study sample do not permit generalizable conclusions; however, the study findings are consistent with the absence of clinical cases in the Niagara Region, so it is likely that human transmission is indeed negligible or absent. As evidenced by our WNV serology results, PRNT must be utilized as a confirmatory test, since false positivity occurs frequently. This is especially true when previous exposure to Dengue virus is likely.
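The seroprevalence proportions quoted above can be reproduced directly from the reported counts; the sketch below does so for the PRNT-confirmed Dengue result (13 of 92 ≈ 14.1%) and adds a simple normal-approximation 95% confidence interval, which is an illustrative choice rather than the study's reported method.

# Seroprevalence proportion with a normal-approximation 95% CI (sketch only).
import math

def seroprevalence(positives, n, z=1.96):
    p = positives / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

p, lo, hi = seroprevalence(13, 92)   # PRNT-confirmed Dengue positives
print(f"Dengue seroprevalence: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")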

Relevance: 60.00%

Abstract:

Background. The ABO and Rh(D) phenotypes of blood donors and of transfused patients are routinely analyzed to ensure full compatibility. These analyses are performed by agglutination following an antibody-antigen reaction. However, because of prohibitive costs and analysis times, blood donations are not routinely tested for minor blood antigens. This gap can result in allo-immunization of recipient patients against one or more minor antigens and thus lead to severe complications in future transfusions. Study design and methods. To address this problem, we produced a genetic panel based on Beckman Coulter's GenomeLab SNPstream technology, with the aim of simultaneously analyzing 22 minor blood antigens. The DNA source is the patients' white blood cells, previously isolated on FTA paper. Results. The results show that the genotype discordance rate, measured by the correlation of genotyping results from the two DNA strands, as well as the genotyping failure rate, are very low (0.1%). Likewise, the correlation between the phenotype results predicted by genotyping and the actual phenotypes obtained by serology of red blood cells and platelets ranges between 97% and 100%. Experimental or database-handling errors, as well as rare polymorphisms affecting antigen conformation, could explain the differences in results. However, given that the phenotype results obtained by genotyping will always be co-verified before any blood transfusion using standard, government-approved technologies, the correlation rates obtained are well above the success criteria expected for the project. Conclusion. Genetic profiling of minor blood antigens will make it possible to create a centralized computerized database of donor phenotypes, allowing blood banks to quickly find compatible profiles between donors and recipients.
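To illustrate the kind of genotype-to-predicted-phenotype mapping such a panel implies, the sketch below uses a lookup table. The antigen names, SNP genotype calls and mappings are hypothetical placeholders and bear no relation to the actual 22-antigen panel described above.

# Hypothetical genotype -> predicted antigen phenotype lookup (sketch only).

PANEL = {
    "antigen_X": {"AA": "X+", "AG": "X+", "GG": "X-"},
    "antigen_Y": {"CC": "Y+", "CT": "Y+", "TT": "Y-"},
}

def predict_phenotypes(genotypes):
    """Map a dict of {antigen: genotype call} to predicted phenotypes."""
    predicted = {}
    for antigen, call in genotypes.items():
        predicted[antigen] = PANEL.get(antigen, {}).get(call, "no-call")
    return predicted

print(predict_phenotypes({"antigen_X": "AG", "antigen_Y": "TT"}))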

Relevance: 60.00%

Abstract:

The rate of sport participation among young Canadians declines from year to year and tends to decrease with age. The sharper drop in sport participation upon entry into secondary school is notably well documented. This study looks at the obstacles to and reasons for sport participation, as well as the enjoyment experienced in sport and in the physical education class. A quantitative methodology was used for this study. Data were collected through an online questionnaire survey. The convenience sample consists of 2001 French-speaking students from the first to the fifth year of secondary school who, at the time of the study, attended a French-language Canadian school. Among these youths, 367 were classified as athletes (a minimum of four hours of sport per week), 241 as former athletes and 142 as non-athletes, for a total of 750 respondents included in the analyses. The results indicate that, in general, the most important obstacle concerns time constraints, while the most important reasons relate to the enjoyment felt during the sporting activity itself. The results also suggest differences according to the type of athlete and the age and sex of the youths. Athletes report greater enjoyment than former athletes both during sport and in the physical education class. Girls are more inclined than boys to consider the obstacles included in this study as obstacles to sport participation. Obstacles also appear to become more numerous with age. Although a significant correlation was observed between enjoyment in physical education and in organized sport, the results indicate that enjoyment in physical education is higher than the enjoyment felt in organized sport.

Relevance: 60.00%

Abstract:

One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming etc. are efficient in handling complex cost functions, but are limited in handling the stochastic data existing in a practical system. Also, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interaction with the environment. The main advantage of this approach is that it does not require a precise mathematical formulation. It can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning approaches. Applications of Reinforcement Learning in the field of power systems have been few. The objective is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment Problem; (ii) find suitable solution strategies through Reinforcement Learning approaches for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; and (iv) check the suitability of the scheduling solutions for one of the existing power systems. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. The Unit Commitment Problem is formulated as a multi-stage decision process. A Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up time / down time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like the Genetic Algorithm. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using a Learning Automata algorithm. Then, to solve the scheduling task of systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended in order to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle a continuous state space, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases. The proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing etc. As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to solve the Automatic Generation Control loop. The RL solution is extended to take up the approach of a common frequency for all the interconnected areas, more similar to practical systems.

The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which provides a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with online data, economic scheduling can be achieved instantaneously in a generation control center. Also, power scheduling of systems with different sources such as hydro, thermal etc. can be looked into and Reinforcement Learning solutions can be achieved.
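To illustrate the idea of treating unit commitment as a multi-stage decision process solved by Q-learning, here is a minimal sketch on a toy two-unit system. The demands, costs, capacities and learning parameters are hypothetical, and the minimum up/down time constraints and state aggregation described in the thesis are omitted for brevity.

# Q-learning on a toy multi-stage unit commitment problem (illustrative only).
import random
from collections import defaultdict

DEMAND = [150, 300, 450, 300]                  # MW per stage (hypothetical)
CAPACITY = {0b01: 200, 0b10: 300, 0b11: 500}   # feasible on/off combinations
COST = {0b01: 10.0, 0b10: 14.0, 0b11: 22.0}    # running cost per stage (toy)

def stage_cost(action, demand):
    # Large penalty if the committed capacity cannot cover the demand.
    return COST[action] + (1000.0 if CAPACITY[action] < demand else 0.0)

Q = defaultdict(float)                         # Q[(stage, action)] = cost-to-go
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(5000):
    for t, d in enumerate(DEMAND):
        actions = list(CAPACITY)
        a = random.choice(actions) if random.random() < epsilon \
            else min(actions, key=lambda x: Q[(t, x)])
        target = stage_cost(a, d)
        if t + 1 < len(DEMAND):
            target += gamma * min(Q[(t + 1, x)] for x in CAPACITY)
        Q[(t, a)] += alpha * (target - Q[(t, a)])

schedule = [min(CAPACITY, key=lambda x: Q[(t, x)]) for t in range(len(DEMAND))]
print([format(a, "02b") for a in schedule])    # greedy commitment per stage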

Relevance: 60.00%

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique assisting the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.

Incorrect machine-code sequences are identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram formed for the active memory bank state transitions corresponding to each bank selection instruction are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler/assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly with machine-code patterns, which drastically reduces the state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features in developing embedded systems.
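The redundancy analysis sketched below tracks the active memory bank along a straight-line instruction sequence and flags bank-select instructions that do not change it. It is only loosely inspired by the relation-matrix/state-transition approach described above; the mnemonics and the simple banking model are hypothetical simplifications of PIC-style banking.

# Detect redundant bank-select instructions in a linear code sequence (sketch).

def find_redundant_bank_selects(instructions):
    """Return indices of BANKSEL instructions that do not change the active bank."""
    active_bank = None
    redundant = []
    for i, (mnemonic, operand) in enumerate(instructions):
        if mnemonic == "BANKSEL":
            if operand == active_bank:
                redundant.append(i)        # re-selecting the current bank
            active_bank = operand
    return redundant

program = [
    ("BANKSEL", 0), ("MOVWF", "PORTA"),
    ("BANKSEL", 0), ("MOVWF", "PORTB"),   # redundant: bank 0 already active
    ("BANKSEL", 1), ("MOVWF", "TRISA"),
]
print(find_redundant_bank_selects(program))   # -> [2]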

Relevance: 60.00%

Abstract:

This thesis is entitled INVESTIGATIONS ON THE STRUCTURAL, OPTICAL AND MAGNETIC PROPERTIES OF NANOSTRUCTURED CERIUM OXIDE IN PURE AND DOPED FORMS AND ITS POLYMER NANOCOMPOSITES. Synthesis and processing of nanomaterials and nanostructures are essential aspects of nanotechnology. Studies on new physical properties and applications of nanomaterials and nanostructures are possible only when nanostructured materials are made available with the desired size, morphology, crystal structure and chemical composition. Recently, several methods have been developed to prepare pure and doped CeO2 powder, including wet chemical synthesis, thermal hydrolysis, the flux method, hydrothermal synthesis, the gas condensation method, microwave techniques etc. All of these involve special reaction conditions, such as high temperature, high pressure, capping agents, or expensive or toxic solvents. Another highlight of the present work is room-temperature ferromagnetism in cerium oxide thin films deposited by the spray pyrolysis technique. The observation of self-trapped exciton (STE) mediated PL in ceria nanocrystals is another important outcome of the present study. An STE-mediated mechanism has been proposed for CeO2 nanocrystals based on the dependence of PL intensity on the annealing temperature. It would be interesting to extend these investigations to the doped forms of cerium oxide and cerium oxide thin films to gain deeper insight into the STE mechanism. Due to time constraints, detailed investigations could not be carried out on the preparation and properties of free-standing films of polymer/ceria nanocomposites. It has been observed that good-quality free-standing films of PVDF/ceria, PS/ceria and PMMA/ceria can be obtained using the solution casting technique. These polymer nanocomposite films show a high dielectric constant of around 20 and offer prospects for application as gate electrodes in metal-oxide semiconductor devices.

Relevance: 60.00%

Abstract:

The Unit Commitment Problem (UCP) in power systems refers to the problem of determining the on/off status of generating units that minimizes the operating cost over a given time horizon. Since various system and generation constraints are to be satisfied while finding the optimum schedule, the UCP turns out to be a constrained optimization problem in power system scheduling. Numerical solutions developed are limited to small systems, and heuristic methodologies have difficulty handling the stochastic cost functions associated with practical systems. This paper models Unit Commitment as a multi-stage decision making task, and an efficient Reinforcement Learning solution is formulated considering minimum up time / down time constraints. The correctness and efficiency of the developed solutions are verified on standard test systems.
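The minimum up time / down time constraints mentioned above are a feasibility rule on each unit's on/off trajectory; the sketch below checks a candidate schedule against them. The schedule and the 3-hour/2-hour limits are hypothetical, and the usual exemption for runs at the horizon boundaries is omitted.

# Check minimum up/down time constraints on a single unit's schedule (sketch).

def violates_min_up_down(status, min_up=3, min_down=2):
    """Return True if any on/off run in `status` (list of 0/1) is too short."""
    runs, current, length = [], status[0], 1
    for s in status[1:]:
        if s == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = s, 1
    runs.append((current, length))
    return any((state == 1 and run < min_up) or (state == 0 and run < min_down)
               for state, run in runs)

print(violates_min_up_down([1, 1, 1, 0, 0, 1, 1, 1]))   # False: all runs long enough
print(violates_min_up_down([1, 1, 0, 1, 1, 1, 0, 0]))   # True: a 2-hour up run < 3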

Relevance: 60.00%

Abstract:

This work describes the use of free software tools, primarily GRASS and R, to obtain a series of land cover maps (1976-2006) from Landsat MSS and Landsat TM satellite images. The project was funded for one year, so a methodology was required that allowed the analysis to be carried out quickly and simply, while still attempting to apply advanced classification techniques. Given the complexity of the work and the tight timeline, a large part of the work was automated by means of various scripts written in BASH and R. (...)