983 results for "Application programs"


Relevance:

70.00%

Publisher:

Abstract:

This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware that provides all resources needed for software execution; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, allowing the user to execute the application even when the server is unavailable or the machine is off the network. © Springer-Verlag Berlin Heidelberg 2010.
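The off-line support described above can be sketched in a few lines: application blocks are fetched on demand and cached locally, so execution can continue from the cache when the server is unreachable. This is a minimal illustration with invented names, not the paper's actual middleware.

```python
class BlockStore:
    """Local cache of application blocks, filled on demand (illustrative sketch)."""

    def __init__(self, remote):
        self.remote = remote  # callable: block_id -> bytes; may fail when off-line
        self.cache = {}       # block_id -> bytes, survives loss of connectivity

    def fetch(self, block_id):
        """On-line path: serve from cache, else download and cache."""
        if block_id in self.cache:
            return self.cache[block_id]
        data = self.remote(block_id)   # on-demand download
        self.cache[block_id] = data
        return data

    def fetch_offline(self, block_id):
        """Off-line path: serve from cache only, as when the server is down."""
        return self.cache.get(block_id)
```

A block that was ever fetched on-line remains available off-line; blocks never touched are simply absent, which is why the paper's resource-usage mapping matters for deciding what to prefetch.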

Relevance:

60.00%

Publisher:

Abstract:

Of all the sources of renewable energy available, one can argue that the most abundant and accessible are solar radiation and the energy of the tides (70% of the Earth's surface is covered by water). Tidal energy has not yet seen widespread deployment, mainly owing to a lack of government interest: most coastal areas of the world are the exclusive responsibility of governments and thus not easily open to private ventures. For solar power there are two main fields of application, land-based systems and space-based systems. The latter are still in a very embryonic phase, with Japan leading research in the field and planning an experimental satellite power station to be launched before 2010. Land-based systems, on the other hand, are well studied, with major research and application programs in all known forms of solar power production. Given a minimum value of incident radiation, and applying the appropriate system (i.e. power plant type) for any given area, solar power becomes an income-producing industry.

Relevance:

60.00%

Publisher:

Abstract:

To establish an insecticide resistance surveillance program, Culex quinquefasciatus mosquitoes from São Paulo, Brazil, were colonized (PIN95 strain) and analyzed for levels of resistance. The PIN95 strain showed low levels of resistance to organophosphates [malathion (3.3-fold), fenitrothion (11.2-fold)] and a carbamate [propoxur (3.0-fold)]. We also observed increases of 7.4- and 9.9-fold in A and B esterase activities, respectively, compared with the reference IAL strain. An alteration in the sensitivity of acetylcholinesterase to insecticide inhibition was also found in the PIN95 mosquitoes. The resistant allele (Ace.1R), however, was found at low frequency (0.12) and does not play an important role in the described insecticide resistance. One year later, Cx. quinquefasciatus mosquitoes were collected at the same site (PIN96 strain) and compared to the PIN95 strain. The esterase activity patterns observed for the PIN96 strain were similar to those of the PIN95 mosquitoes; however, the frequency of the Ace.1R allele was significantly higher in the PIN96 strain. The results show that esterase-based insecticide resistance was established in the PIN95 Cx. quinquefasciatus population and that an acetylcholinesterase-based resistance mechanism has been selected for. Continuous monitoring of this phenomenon is fundamental for rational mosquito control and insecticide application programs.
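The fold-resistance ratios and allele frequency quoted above follow standard calculations, sketched here with illustrative function names (not from the paper): resistance ratio as the LC50 of the test strain over that of the susceptible reference, and allele frequency from diploid genotype counts.

```python
def fold_resistance(lc50_test, lc50_reference):
    """Resistance ratio: LC50 of the test strain over the susceptible reference."""
    return lc50_test / lc50_reference

def allele_frequency(n_rr, n_rs, n_ss):
    """Frequency of the resistant allele R from counts of RR, RS and SS genotypes.

    Each of the n diploid individuals carries two alleles, so the R count is
    2 * n_rr + n_rs out of 2 * n total alleles.
    """
    n = n_rr + n_rs + n_ss
    return (2 * n_rr + n_rs) / (2 * n)
```

For example, 1 RR, 10 RS and 39 SS individuals give an R frequency of 12/100 = 0.12, the value reported for Ace.1R in the PIN95 strain.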

Relevance:

60.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This motivates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals; such analysis could significantly improve software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-detect software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated-peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually on all possible execution paths of the application programs. Incorrect sequences of machine-code patterns are identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is also proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, built for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code; instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
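The core of the redundant bank-switching detection can be sketched as a simple state-tracking pass: walk a straight-line instruction sequence, keep the currently active memory bank as state, and flag any bank-select that re-selects the bank already active (the no-op transitions the dissertation's state transition diagram exposes). The instruction mnemonics here are illustrative; the actual tool targets PIC16F87X machine code and handles full control-flow graphs, not just straight-line code.

```python
def redundant_bank_selects(instructions, initial_bank=0):
    """Return indices of bank-select instructions that are redundant.

    instructions: list of (opcode, operand) pairs; "BANKSEL" selects a bank.
    A BANKSEL is redundant when its target equals the already-active bank.
    """
    active = initial_bank
    redundant = []
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == active:
                redundant.append(i)  # state transition is a no-op
            active = arg
    return redundant
```

On a control-flow graph the same idea is applied per path, which is why the dissertation pairs the relation matrix with a state transition diagram rather than a single linear scan.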

Relevance:

60.00%

Publisher:

Abstract:

Fine-grained parallel machines have the potential for very high speed computation. To program massively-concurrent MIMD machines, programmers need tools for managing complexity. These tools should not restrict program concurrency. Concurrent Aggregates (CA) provides multiple-access data abstraction tools, Aggregates, which can be used to implement abstractions with virtually unlimited potential for concurrency. Such tools allow programmers to modularize programs without reducing concurrency. I describe the design, motivation, implementation and evaluation of Concurrent Aggregates. CA has been used to construct a number of application programs. Multi-access data abstractions are found to be useful in constructing highly concurrent programs.
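The key idea of an Aggregate, multi-access data abstraction that does not serialize its clients, can be illustrated outside the CA language itself. The sketch below (Python threads, invented names) splits a counter's state across several representatives so concurrent increments need not contend on a single lock; reads combine the representatives.

```python
import threading

class AggregateCounter:
    """Multi-access counter in the spirit of a Concurrent Aggregate:
    state is distributed over representatives to avoid one serial bottleneck."""

    def __init__(self, n_reps=4):
        self.reps = [0] * n_reps
        self.locks = [threading.Lock() for _ in range(n_reps)]

    def increment(self, hint):
        i = hint % len(self.reps)   # pick a representative to touch
        with self.locks[i]:
            self.reps[i] += 1

    def value(self):
        return sum(self.reps)       # combine across representatives on read
```

A single-lock counter would admit only one client at a time; the aggregate form keeps the same abstraction boundary while allowing up to n_reps increments to proceed concurrently, which is the modularity-without-lost-concurrency point the abstract makes.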

Relevance:

60.00%

Publisher:

Abstract:

This work aims to implement an intelligent computational tool to identify non-technical losses and to select their most relevant features, using information from a database of industrial consumer profiles of a power company. The solution to this problem is neither trivial nor merely of regional interest: minimizing non-technical losses helps guarantee investments in product quality and in the maintenance of power systems, within the competitive environment introduced after the period of privatization in the national scene. This work applies the WEKA software to the proposed objective, comparing various classification techniques and optimization through intelligent algorithms; in this way, it becomes possible to automate applications on Smart Grids. © 2012 IEEE.
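The feature-selection step can be illustrated with a crude filter score (the paper itself relies on WEKA's built-in methods; the data and function names below are invented): rank consumer-profile features by how strongly their class means differ between fraudulent and regular consumers.

```python
def rank_features(samples, labels):
    """Rank feature indices by class-mean separation (a simple filter score).

    samples: list of equal-length numeric feature vectors
    labels:  0 (regular) or 1 (non-technical loss) per sample
    Returns feature indices, best-separating first.
    """
    n_feat = len(samples[0])
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]

    def separation(j):
        mean_pos = sum(s[j] for s in pos) / len(pos)
        mean_neg = sum(s[j] for s in neg) / len(neg)
        return abs(mean_pos - mean_neg)

    return sorted(range(n_feat), key=separation, reverse=True)
```

Real tools use stronger criteria (information gain, wrapper search with a classifier), but the pipeline shape is the same: score each feature against the class label, keep the top-ranked subset, then train classifiers on it.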

Relevance:

60.00%

Publisher:

Abstract:

Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, with isolated memory spaces and a fraction of the real processor time available to each of them. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM supervisor is used as an example.
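One consequence of granting each virtual machine only a fraction of the processor is that virtual (CPU-time) delays and wall-clock delays diverge, which is why dedicated time services are needed. A minimal sketch of the arithmetic, with invented names:

```python
def wall_clock_bound(virtual_delay, cpu_fraction):
    """Worst-case wall-clock time for a partition to accumulate
    virtual_delay seconds of CPU time when granted cpu_fraction
    of the real processor (illustrative model, not the ORK+/XtratuM API)."""
    if not 0 < cpu_fraction <= 1:
        raise ValueError("cpu_fraction must be in (0, 1]")
    return virtual_delay / cpu_fraction
```

For instance, an Ada `delay 0.5` interpreted as execution time on a partition with a 25% CPU share may take up to 2 seconds of wall-clock time to expire, so timeouts and deadlines must state which clock they refer to.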

Relevance:

60.00%

Publisher:

Abstract:

The intellectual property laws in the United States provide the owners of intellectual property with discretion to license the right to use that property or to make or sell products that embody the intellectual property. However, the antitrust laws constrain the use of property, including intellectual property, by a firm with market power and may place limitations on the licensing of intellectual property. This paper focuses on one aspect of antitrust law, the so-called “essential facilities doctrine,” which may impose a duty upon firms controlling an “essential facility” to make that facility available to their rivals. In the intellectual property context, an obligation to make property available is equivalent to a requirement for compulsory licensing. Compulsory licensing may embrace the requirement that the owner of software permit access to the underlying code so that others can develop compatible application programs. Compulsory licensing may undermine incentives for research and development by reducing the value of an innovation to the inventor. This paper shows that compulsory licensing also may reduce economic efficiency in the short run by facilitating the entry of inefficient producers and by promoting licensing arrangements that result in higher prices.

Relevance:

60.00%

Publisher:

Abstract:

Corpus Linguistics is a young discipline. The earliest work was done in the 1960s, but corpora only began to be widely used by lexicographers and linguists in the late 1980s, by language teachers in the late 1990s, and by language students only very recently. This course in corpus linguistics was held at the Departamento de Linguistica Aplicada, E.T.S.I. de Minas, Universidad Politecnica de Madrid, from June 15 to 19, 1998. About 45 teachers registered for the course: 30% had PhDs in linguistics, 20% in literature, and the rest were doctoral candidates or qualified English teachers. The course was designed to introduce the use of corpora and other computational resources in teaching and research, with special reference to scientific and technological discourse in English. Each participant had a computer networked with the lecturer's machine, whose display could be projected onto a large screen. Application programs were loaded onto the central server, and telnet and a web browser were available. COBUILD gave us permission to access the 323-million-word Bank of English corpus, Mike Scott allowed us to use his Wordsmith Tools software, and Tim Johns gave us a copy of his MicroConcord program.

Relevance:

60.00%

Publisher:

Abstract:

Query processing is a commonly performed procedure and a vital and integral part of information processing. It is therefore important for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on them. The relational database model and the Structured Query Language (SQL) are currently the most popular tools to implement and query databases; however, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a more semantically rich way, the result of which is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries; additionally, it shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented. I further extend the use of this semantic wrapping method to heterogeneous database management: relational databases, object-oriented databases and Internet data sources are considered part of the heterogeneous database environment. Semantic schemas resulting from the algorithm presented in the method were employed to describe the structure of these data sources in a uniform way, and Semantic SQL was utilized to query the various data sources. As a result, this method provides users with the ability to access and perform queries on heterogeneous database systems in a more natural way.
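The way a semantic view shrinks queries can be sketched concretely. In the toy mapping below (schema and names invented, not the dissertation's actual Semantic SQL), the user names attributes alone and the view supplies the table qualification and join, so a two-attribute semantic query expands into a full SQL join the user never writes:

```python
# Semantic view: attribute name -> (table, column); the join is part of the view.
SEMANTIC_VIEW = {
    "customer_name": ("customers", "name"),
    "order_total":   ("orders", "total"),
}
JOIN = "customers JOIN orders ON orders.customer_id = customers.id"

def semantic_select(attributes):
    """Translate a list of semantic attribute names into ordinary SQL."""
    cols = ", ".join("%s.%s" % SEMANTIC_VIEW[a] for a in attributes)
    return "SELECT %s FROM %s" % (cols, JOIN)
```

The user-facing query is just the attribute list; the join path, key columns and table names are resolved by the view, which is the size-and-complexity reduction the abstract describes.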

Relevance:

60.00%

Publisher:

Abstract:

Unequal improvements in processor and I/O speeds cause many applications, such as databases and operating systems, to become increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors were proposed to address the write problem, and interleaved declustering to address the load-balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition, we also study traditional mirroring to provide a common basis for comparison.
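The traditional mirroring behavior described above, every write updates both disks, and a single failure is invisible to readers, fits in a short sketch (in-memory stand-ins for disks, invented names):

```python
class Mirror:
    """Toy disk mirror: one logical image maintained on two physical disks."""

    def __init__(self):
        self.disks = [dict(), dict()]  # two physical disk images (block -> data)
        self.alive = [True, True]

    def write(self, block, data):
        # The write drawback: every live copy must be updated.
        for disk, ok in zip(self.disks, self.alive):
            if ok:
                disk[block] = data

    def read(self, block):
        # Any live disk can serve; after a failure, the survivor serves all
        # requests, which is the load-balancing drawback in failure mode.
        for disk, ok in zip(self.disks, self.alive):
            if ok:
                return disk.get(block)
        raise IOError("both disks failed")

    def fail(self, i):
        self.alive[i] = False
```

Distorted mirrors and interleaved declustering each restructure one of these two loops: the former reorganizes where the second copy lands to cheapen writes, the latter spreads a failed disk's load across several survivors.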

Relevance:

60.00%

Publisher:

Abstract:

Despite efforts to better manage biosolids field application programs, biosolids managers still lack efficient and reliable tools to apply large quantities of material while avoiding odor complaints. The objectives of this research were to determine the capabilities of an electronic nose in supporting process monitoring of biosolids production, and to compare the odor characteristics of biosolids produced through thermal-hydrolysis anaerobic digestion (TH-AD) to those of alkaline stabilization in the plant, under storage, and in the field. A method to quantify key odorants was developed, and full-scale sampling and laboratory simulations were performed. The portable electronic nose (PEN3) was tested for its ability to distinguish alkali dosages in the biosolids production process. Recognition of unknown samples was tested, achieving a highest accuracy of 81.1%. This work exposed the need for a different and more sensitive electronic nose to ensure its applicability at full scale for this process. GC-MS results were consistent with those reported in the literature and helped to elucidate the behavior of the pattern recognition of the PEN3. Odor characterization of TH-AD and alkaline-stabilized biosolids was achieved using olfactometry measurements and GC-MS. The dilution-to-threshold of TH-AD biosolids increased under storage conditions, but no correlation was found with the target compounds. The presence of furan and three methylated homologues in TH-AD biosolids was reported for the first time, suggesting that these compounds are produced during the thermal hydrolysis process; however, additional research is needed to fully describe their formation and the increase in odors. Alkaline-stabilized biosolids showed similar odor concentrations that did not increase under storage, but the 'fishy' odor from trimethylamine emissions resulted in more offensive and unpleasant odors compared to TH-AD.
Alkaline-stabilized biosolids showed a spike in sulfur compounds and trimethylamine after 3 days of field application when the alkali addition was not sufficient to meet regulatory standards. Concentrations of target compounds from field application of TH-AD biosolids gradually decreased to below the odor threshold after 3 days. This work increased the scientific understanding of the odor characteristics and behavior of two types of biosolids, and of the application of electronic noses to the environmental engineering field.

Relevance:

40.00%

Publisher:

Abstract:

Failure to detect a species in an area where it is present is a major source of error in biological surveys. We assessed whether it is possible to optimize single-visit biological monitoring surveys of highly dynamic freshwater ecosystems by framing them a priori within a particular period of time. Alternatively, we also searched for the optimal number of visits and when they should be conducted. We developed single-species occupancy models to estimate the monthly probability of detection of pond-breeding amphibians during a four-year monitoring program. Our results revealed that detection probability was species-specific and changed among sampling visits within a breeding season and also among breeding seasons. Thus, optimizing biological surveys to a minimal survey effort (a single visit) is not feasible, as it proves impossible to select a priori an adequate sampling period that remains robust across years. Alternatively, a two-survey combination at the beginning of the sampling season yielded optimal results and constituted an acceptable compromise between sampling efficacy and survey effort. Our study provides evidence of the variability and uncertainty that likely affect the efficacy of monitoring surveys, highlighting the need for repeated sampling in both ecological studies and conservation management.
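The arithmetic behind the survey-effort trade-off discussed above is simple: if the per-visit detection probability is p, the chance of detecting a species that is actually present at least once in k independent visits is 1 - (1 - p)^k, so adding a second visit sharply reduces the chance of a false absence.

```python
def detection_prob(p, k):
    """Probability of at least one detection in k independent visits,
    given per-visit detection probability p (standard occupancy-model
    identity, not code from the study)."""
    return 1 - (1 - p) ** k
```

With p = 0.5, a single visit misses a present species half the time, while two visits cut the miss rate to 25%, which is consistent with the study's finding that a two-survey combination is an acceptable compromise.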