873 results for Energy Efficient Algorithms


Relevance: 30.00%

Publisher:

Abstract:

The present work deals with studies on energy requirement and conservation in selected fish harvesting systems. Modern fishing is one of the most energy-intensive methods of food production. Fossil fuels used for motorised and mechanised fishing are non-renewable and limited. Most of the environmental problems that confront mankind today are connected to the use of energy in one way or another. The Code of Conduct for Responsible Fisheries (FAO, 1995) highlights the need for efficient use of energy in the fisheries sector. Information on energy requirements in different fish harvesting systems, based on the principles of energy analysis, is entirely lacking for Indian fisheries. Such an analysis will provide unbiased decision-making support for maximising the yield per unit of non-renewable energy use from different fishery resource systems, by rational deployment of harvesting systems. In the present study, results of investigations conducted during 1997-2000 on energy requirements in selected fish harvesting systems and approaches to energy conservation in fishing are presented, along with a detailed description of the fish harvesting systems and their operation. The thesis is organised into eight chapters.

Relevance: 30.00%

Publisher:

Abstract:

In recent years, protection of information in digital form has become increasingly important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as in transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of “attack on confidentiality”. Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize the latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric-key block ciphers like the Data Encryption Standard (DES), Advanced Encryption Standard (AES) and Escrowed Encryption Standard (EES) are not efficient when the data size is large. With the availability of fast computing tools and communication networks at relatively lower costs today, these encryption standards appear to be not as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking. Fast encryption algorithms are needed these days for high-speed secure communication of multimedia data. It has been shown that public key algorithms are not a substitute for symmetric-key algorithms. Public key algorithms are slow, whereas symmetric-key algorithms generally run much faster. Also, public key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric-key encryption scheme, entitled “Matrix Array Symmetric Key (MASK) encryption”, based on matrix and array manipulations has been conceived and developed. Fast conversion has been achieved with the use of matrix table look-up substitution, array-based transposition and circular shift operations that are performed in the algorithm. MASK encryption is a new concept in symmetric-key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits using a secret key of size 128 bits, producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler, and consequently, much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
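
As an illustration only, the following toy sketch in Python (with placeholder tables and a made-up key-dependent shift, not the published MASK cipher) shows the kind of operations described above: table look-up substitution, array-based transposition and a circular shift on a 128-bit block.

    # Toy sketch of the style of operations described above (NOT the MASK cipher itself):
    # byte-wise table look-up substitution, a fixed transposition, and a key-dependent
    # circular shift on a 128-bit (16-byte) block. Tables and key handling are placeholders.
    SBOX = [(i * 7 + 31) % 256 for i in range(256)]                 # placeholder substitution table
    PERM = [5, 12, 1, 8, 15, 2, 9, 0, 13, 6, 3, 10, 7, 14, 11, 4]   # placeholder transposition

    def rotl_block(block, n):
        """Circularly shift a 16-byte block left by n bits."""
        v = int.from_bytes(block, "big")
        n %= 128
        v = ((v << n) | (v >> (128 - n))) & ((1 << 128) - 1)
        return v.to_bytes(16, "big")

    def encrypt_block(plain16, key16):
        assert len(plain16) == 16 and len(key16) == 16
        state = bytes(b ^ k for b, k in zip(plain16, key16))        # mix in the 128-bit key
        state = bytes(SBOX[b] for b in state)                       # table look-up substitution
        state = bytes(state[PERM[i]] for i in range(16))            # array-based transposition
        state = rotl_block(state, sum(key16) % 128)                 # key-dependent circular shift
        return state

    demo = encrypt_block(b"sixteen byte msg", b"0123456789abcdef")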

Relevance: 30.00%

Publisher:

Abstract:

While channel coding is a standard method of improving a system’s energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands in network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
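
A small Monte Carlo sketch in Python (with assumed tap weight and noise level, not the paper's analytical framework) of how joint error behaviour can be probed in a link with one residual ISI tap plus Gaussian noise, by comparing the joint error probability at a given symbol separation with the square of the marginal error probability.

    # Toy estimate of error correlation vs. symbol separation for a binary link with one
    # residual post-cursor ISI tap and additive Gaussian noise. All parameters are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n, isi_tap, sigma = 200_000, 0.35, 0.25
    bits = rng.integers(0, 2, n)
    sym = 2 * bits - 1.0                                       # bipolar symbols
    rx = sym + isi_tap * np.roll(sym, 1) + rng.normal(0, sigma, n)
    errors = ((rx > 0).astype(int) != bits).astype(float)      # threshold decisions

    p = errors.mean()
    for d in range(1, 6):                                      # joint errors at lag d
        joint = np.mean(errors[:-d] * errors[d:])
        print(f"lag {d}: P(joint)/P(err)^2 = {joint / p**2:.2f}")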

Relevance: 30.00%

Publisher:

Abstract:

The cement industry ranks second in energy consumption among industries in India. It is one of the major emitters of CO2, due to combustion of fossil fuel and the calcination process. As the huge amount of CO2 emissions causes severe environmental problems, the efficient and effective utilization of energy is a major concern in the Indian cement industry. The main objective of the research work is to assess the energy consumption and energy conservation of the Indian cement industry and to predict future trends in cement production and reduction of CO2 emissions. In order to achieve this objective, a detailed energy and exergy analysis of a typical cement plant in Kerala was carried out. The data on fuel usage, electricity consumption, and the amount of clinker and cement production were also collected from a few selected cement industries in India for the period 2001-2010 and the CO2 emissions were estimated. A complete decomposition method was used for the analysis of the change in CO2 emissions during the period 2001-2010 by categorising the cement industries according to their specific thermal energy consumption. A basic forecasting model for the cement production trend was developed using the system dynamics approach and the model was validated with the data collected from the selected cement industries. The cement production and CO2 emissions from the industries were also predicted with 2010 as the base year. A sensitivity analysis of the forecasting model was conducted and found satisfactory. The model was then modified for the total cement production in India to predict the cement production and CO2 emissions for the next 21 years under three different scenarios. The parameters that influence CO2 emissions, such as population and GDP growth rate, demand for cement and its production, clinker consumption and energy utilization, are incorporated in these scenarios. The existing growth rates of the population and cement production in the year 2010 were used in the baseline scenario. In scenario-1 (S1) the growth rate of population was assumed to be gradually decreasing and to finally reach zero by the year 2030, while in scenario-2 (S2) a faster decline in the growth rate was assumed such that zero growth rate is achieved in the year 2020. The mitigation strategies for the reduction of CO2 emissions from cement production were identified and analyzed in the energy management scenario. The energy and exergy analysis of the raw mill of the cement plant revealed that the exergy utilization was worse than the energy utilization. The energy analysis of the kiln system showed that around 38% of heat energy is wasted through exhaust gases of the preheater and cooler of the kiln system. This could be recovered by a waste heat recovery system. A secondary insulation shell was also recommended for the kiln in the plant in order to prevent heat loss and enhance the efficiency of the plant. The decomposition analysis of the change in CO2 emissions during 2001-2010 showed that the activity effect was the main factor in CO2 emissions for the cement industries, since it is directly dependent on the economic growth of the country. The forecasting model showed that 15.22% and 29.44% reductions in CO2 emissions can be achieved by the year 2030 in scenario-1 (S1) and scenario-2 (S2) respectively. In analysing the energy management scenario, it was assumed that 25% of the electrical energy supply to the cement plants is replaced by renewable energy.
The analysis revealed that the recovery of waste heat and the use of renewable energy could lead to a decline in CO2 emissions of 7.1% in the baseline scenario, 10.9% in scenario-1 (S1) and 11.16% in scenario-2 (S2) by 2030. The combined scenario, considering population stabilization by the year 2020, a 25% contribution from renewable energy sources in the cement industry and 38% of thermal energy recovered from the waste heat streams, shows that CO2 emissions from the Indian cement industry could be reduced by nearly 37% in the year 2030. This would remove a substantial greenhouse gas load from the environment. The cement industry will remain one of the critical sectors for India to meet its CO2 emissions reduction target. India’s cement production will continue to grow in the near future due to its GDP growth. The control of population, improvement in plant efficiency and use of renewable energy are the important options for the mitigation of CO2 emissions from Indian cement industries.
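
For illustration, a minimal sketch in Python (with made-up numbers, not data from the study) of a two-factor complete decomposition of the change in emissions E = A × I into an activity effect and an emission-intensity effect, with the joint term shared equally between the two factors.

    # Hedged sketch of a two-factor complete decomposition: split the change in emissions
    # E = A * I (activity x emission intensity) into an activity effect and an intensity
    # effect. The numbers below are illustrative placeholders, not values from the study.
    def complete_decomposition(A0, A1, I0, I1):
        dA, dI = A1 - A0, I1 - I0
        activity_effect = dA * I0 + 0.5 * dA * dI     # joint term shared equally
        intensity_effect = A0 * dI + 0.5 * dA * dI
        return activity_effect, intensity_effect

    # illustrative numbers only: cement output (Mt) and tCO2 per tonne of cement
    act, inten = complete_decomposition(A0=100.0, A1=160.0, I0=0.85, I1=0.80)
    print(act, inten, act + inten)   # the two effects sum to the total change in emissions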

Relevance: 30.00%

Publisher:

Abstract:

Presently, different audio watermarking methods are available; most of them are inclined towards copyright protection and copy protection. This is the key motivation for developing a speaker verification scheme that guarantees non-repudiation services, and the thesis is its outcome. The research presented in this thesis scrutinizes the field of audio watermarking and the outcome is a speaker verification scheme that is proficient in addressing issues allied to non-repudiation to a great extent. This work aimed at developing novel audio watermarking schemes utilizing the fundamental ideas of the Fast Fourier Transform (FFT) or Fast Walsh-Hadamard Transform (FWHT). The Mel-Frequency Cepstral Coefficients (MFCC), the best parametric representation of the acoustic signals, along with a few other key acoustic characteristics, are employed in crafting the new schemes. The audio watermark created is entirely dependent on the acoustic features, hence named FeatureMark, and is crucial in this work. In any watermarking scheme, the quality of the extracted watermark depends exclusively on the pre-processing action, and in this work framing and windowing techniques are involved. The theme of non-repudiation is of immense significance in the audio watermarking schemes proposed in this work. Modification of the signal spectrum is achieved in a variety of ways by selecting appropriate FFT/FWHT coefficients, and the watermarking schemes were evaluated for imperceptibility, robustness and capacity characteristics. The proposed schemes are unequivocally effective in terms of maintaining the sound quality, retrieving the embedded FeatureMark and the capacity to hold the mark bits. The robust nature of these marking schemes is achieved with the help of synchronization codes such as the Barker code with the FFT-based FeatureMarking scheme and the Walsh code with the FWHT-based FeatureMarking scheme. Another important feature associated with this scheme is the employment of an encryption scheme towards the preparation of its FeatureMark that scrambles the signal features and helps keep them unrevealed. A comparative study with existing watermarking schemes and the experiments to evaluate imperceptibility, robustness and capacity guarantee that the proposed schemes can be baselined as efficient audio watermarking schemes. The four new digital audio watermarking algorithms are remarkable in terms of their performance, thereby opening more opportunities for further research.
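
A minimal sketch in Python (with assumed frame length, bin indices and embedding strength; the MFCC-derived FeatureMark and the Barker/Walsh synchronisation codes are not reproduced) of the basic embedding step: frame, window, transform with the FFT, and modify selected spectral coefficients according to the mark bits.

    # Minimal illustration of spectrum-domain embedding: one bit per selected FFT bin,
    # embedded by nudging the bin magnitude up or down. Parameters are placeholders.
    import numpy as np

    def embed_frame(frame, bits, bins=(20, 21, 22, 23), delta=0.05):
        win = frame * np.hanning(len(frame))                 # framing + windowing
        spec = np.fft.rfft(win)
        for b, bit in zip(bins, bits):
            spec[b] *= (1 + delta) if bit else (1 - delta)   # modify selected coefficients
        return np.fft.irfft(spec, n=len(frame))

    frame = np.random.default_rng(1).standard_normal(1024)   # stand-in audio frame
    marked = embed_frame(frame, bits=[1, 0, 1, 1])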

Relevance: 30.00%

Publisher:

Abstract:

From the early stages of the twentieth century, polyaniline (PANI), a well-known and extensively studied conducting polymer has captured the attention of scientific community owing to its interesting electrical and optical properties. Starting from its structural properties, to the currently pursued optical, electrical and electrochemical properties, extensive investigations on pure PANI and its composites are still much relevant to explore its potentialities to the maximum extent. The synthesis of highly crystalline PANI films with ordered structure and high electrical conductivity has not been pursued in depth yet. Recently, nanostructured PANI and the nanocomposites of PANI have attracted a great deal of research attention owing to the possibilities of applications in optical switching devices, optoelectronics and energy storage devices. The work presented in the thesis is centered around the realization of highly conducting and structurally ordered PANI and its composites for applications mainly in the areas of nonlinear optics and electrochemical energy storage. Out of the vast variety of application fields of PANI, these two areas are specifically selected for the present studies, because of the following observations. The non-linear optical properties and the energy storing properties of PANI depend quite sensitively on the extent of conjugation of the polymer structure, the type and concentration of the dopants added and the type and size of the nano particles selected for making the nanocomposites. The first phase of the work is devoted to the synthesis of highly ordered and conducting films of PANI doped with various dopants and the structural, morphological and electrical characterization followed by the synthesis of metal nanoparticles incorporated PANI samples and the detailed optical characterization in the linear and nonlinear regimes. The second phase of the work comprises the investigations on the prospects of PANI in realizing polymer based rechargeable lithium ion cells with the inherent structural flexibility of polymer systems and environmental safety and stability. Secondary battery systems have become an inevitable part of daily life. They can be found in most of the portable electronic gadgets and recently they have started powering automobiles, although the power generated is low. The efficient storage of electrical energy generated from solar cells is achieved by using suitable secondary battery systems. The development of rechargeable battery systems having excellent charge storage capacity, cyclability, environmental friendliness and flexibility has yet to be realized in practice. Rechargeable Li-ion cells employing cathode active materials like LiCoO2, LiMn2O4, LiFePO4 have got remarkable charge storage capacity with least charge leakage when not in use. However, material toxicity, chance of cell explosion and lack of effective cell recycling mechanism pose significant risk factors which are to be addressed seriously. These cells also lack flexibility in their design due to the structural characteristics of the electrode materials. Global research is directed towards identifying new class of electrode materials with less risk factors and better structural stability and flexibility. Polymer based electrode materials with inherent flexibility, stability and eco-friendliness can be a suitable choice. One of the prime drawbacks of polymer based cathode materials is the low electronic conductivity. 
Hence the real task with this class of materials is to get better electronic conductivity with good electrical storage capability. Electronic conductivity can be enhanced by using proper dopants. In designing rechargeable Li-ion cells with polymer based cathode active materials, the key issue is to identify the optimum lithiation of the polymer cathode which can ensure the highest electronic conductivity and specific charge capacity possible. The development of conducting polymer based rechargeable Li-ion cells with high specific capacity and excellent cycling characteristics is a highly competitive area among research and development groups worldwide. Polymer based rechargeable batteries are specifically attractive due to the environmentally benign nature and the possible constructional flexibility they offer. Among polymers having electrical transport properties suitable for rechargeable battery applications, polyaniline is the most favoured one due to its tunable electrical conducting properties and the availability of cost effective precursor materials for its synthesis. The performance of a battery depends significantly on the characteristics of its integral parts, the cathode, anode and the electrolyte, which in turn depend on the materials used. Many research groups are involved in developing new electrode and electrolyte materials to enhance the overall performance efficiency of the battery. Currently explored electrolytes for Li-ion battery applications are in liquid or gel form, which makes well-defined sealing essential. The use of solid electrolytes eliminates the need for containment of liquid electrolytes, which will certainly simplify the cell design and improve the safety and durability. The other advantages of polymer electrolytes include dimensional stability, safety and the ability to prevent lithium dendrite formation. One of the ultimate aims of the present work is to realize all solid state, flexible and environment friendly Li-ion cells with high specific capacity and excellent cycling stability. Part of the present work is hence focused on identifying good polymer based solid electrolytes essential for realizing all solid state polymer based Li-ion cells. The present work is an attempt to study the versatile roles of polyaniline in two different fields of technological applications, namely nonlinear optics and energy storage. Conducting forms of doped PANI films with a good extent of crystallinity have been realized using a level surface assisted casting method in addition to the generally employed technique of spin coating. Metal-nanoparticle-embedded PANI offers a rich source for nonlinear optical studies and hence gold and silver nanoparticles have been used for making the nanocomposites in bulk and thin film forms. These PANI nanocomposites are found to exhibit quite dominant third order optical non-linearity. The highlight of these studies is the observation of the interesting phenomenon of the switching between saturable absorption (SA) and reverse saturable absorption (RSA) in the films of Ag/PANI and Au/PANI nanocomposites, which offers prospects of applications in optical switching. The investigations on the energy storage prospects of PANI were carried out on Li enriched PANI which was used as the cathode active material for assembling rechargeable Li-ion cells. For Li enrichment or Li doping of PANI, n-Butyllithium (n-BuLi) in hexanes was used. The Li doping as well as the Li-ion cell assembling were carried out in an argon filled glove box.
Coin cells were assembled with Li-doped PANI with different doping concentrations as the cathode, LiPF6 as the electrolyte and Li metal as the anode. These coin cells are found to show a reasonably good specific capacity of around 22 mAh/g, excellent cycling stability and coulombic efficiency of around 99%. To improve the specific capacity, composites of Li-doped PANI with inorganic cathode active materials like LiFePO4 and LiMn2O4 were synthesized and coin cells were assembled as mentioned earlier to assess the electrochemical capability. The cells assembled using the composite cathodes are found to show a significant enhancement in specific capacity, to around 40 mAh/g. One of the other interesting observations is the complete blocking of the adverse effects of Jahn-Teller distortion when the composite cathode PANI-LiMn2O4 is used for assembling the Li-ion cells. This distortion is generally observed near room temperature when LiMn2O4 is used as the cathode, and it significantly reduces the cycling stability of the cells.

Relevance: 30.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases, was superior to the other representations.
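
A generic sketch in Python of the evolutionary loop described above; random_program, simulate_network, crossover and mutate are placeholders standing in for the thesis's program representations (SGP, LGP, RBGP, ...) and its randomized network simulations.

    # Generic evolutionary loop: a population of candidate programs is scored in randomized
    # network simulations and refined by selection, crossover and mutation. The program
    # encoding and the simulator are placeholders passed in by the caller.
    import random

    def evolve(random_program, simulate_network, crossover, mutate,
               pop_size=50, generations=100, trials=5):
        population = [random_program() for _ in range(pop_size)]
        for _ in range(generations):
            # objective: how closely the locally executed program approximates the
            # specified global behaviour, averaged over randomized simulations
            scored = [(sum(simulate_network(p) for _ in range(trials)) / trials, p)
                      for p in population]
            scored.sort(key=lambda sp: sp[0], reverse=True)
            parents = [p for _, p in scored[:pop_size // 2]]      # keep the most promising
            population = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(pop_size - len(parents))
            ]
        return max(population, key=simulate_network)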

Relevance: 30.00%

Publisher:

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows one to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This allows one to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results allow the fundamental limits regarding certification and tomography of open quantum systems to be elucidated. The combination of these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in the scaling when certifying arbitrary unitary gates. This improvement is not only restricted to quantum information devices where the basic information carrier is the qubit but also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the required memory for numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
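
As a toy illustration only (Python, single qubit, assumed depolarising error; the thesis's algebraic results on minimal input states are analytical and not reproduced here), the Monte Carlo idea of certifying a gate by averaging state fidelities between the ideal unitary and a noisy implementation over randomly sampled input states:

    # Toy Monte Carlo check of a single-qubit gate: average the state fidelity between the
    # ideal unitary and a noisy implementation over random pure inputs.
    import numpy as np

    rng = np.random.default_rng(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    U_ideal = X                                       # target gate: a bit flip

    def noisy_apply(rho, p=0.02):
        """Apply U_ideal, then a small depolarising error with probability p (assumed)."""
        rho = U_ideal @ rho @ U_ideal.conj().T
        return (1 - p) * rho + p * np.eye(2) / 2

    fids = []
    for _ in range(2000):
        v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        v /= np.linalg.norm(v)                        # random pure input state
        ideal = U_ideal @ v
        rho_out = noisy_apply(np.outer(v, v.conj()))
        fids.append(np.real(ideal.conj() @ rho_out @ ideal))
    print("estimated average gate fidelity:", np.mean(fids))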

Relevance: 30.00%

Publisher:

Abstract:

We compare a broad range of optimal product line design methods. The comparisons take advantage of recent advances that make it possible to identify the optimal solution to problems that are too large for complete enumeration. Several of the methods perform surprisingly well, including Simulated Annealing, Product-Swapping and Genetic Algorithms. The Product-Swapping heuristic is remarkable for its simplicity. The performance of this heuristic suggests that the optimal product line design problem may be far easier to solve in practice than indicated by complexity theory.
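
A sketch in Python (with a randomly generated part-worth matrix and an assumed share-of-choices objective, not the paper's benchmark problems) of simulated annealing for product line design: repeatedly swap one product in the line and accept worse lines with a temperature-dependent probability.

    # Simulated annealing over product lines: choose K products from a candidate set to
    # maximise an objective such as share of choices. Utilities and the cooling schedule
    # below are illustrative placeholders.
    import math, random
    import numpy as np

    rng = np.random.default_rng(3)
    utilities = rng.normal(size=(500, 40))            # 500 respondents x 40 candidate products
    K = 4

    def share_of_choices(line):
        # a respondent is captured if some product in the line beats a status-quo utility of 0
        return np.mean(utilities[:, list(line)].max(axis=1) > 0)

    line = set(random.sample(range(40), K))
    best_val, temp = share_of_choices(line), 1.0
    for step in range(5000):
        cand = set(line)
        cand.remove(random.choice(tuple(cand)))                       # swap one product out
        cand.add(random.choice([p for p in range(40) if p not in cand]))
        delta = share_of_choices(cand) - share_of_choices(line)
        if delta > 0 or random.random() < math.exp(delta / temp):     # accept better, sometimes worse
            line = cand
        best_val = max(best_val, share_of_choices(line))
        temp *= 0.999                                                 # geometric cooling
    print("best share of choices:", best_val)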

Relevance: 30.00%

Publisher:

Abstract:

Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate the CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K^2 / (1 − B_K^2), with B_K determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded configurations K. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm^-1) is achieved in a model space M of 1.4 × 10^9 CSFs (1.1 × 10^12 determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10^12 CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
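
A minimal sketch in Python (with made-up coefficients and diagonal elements) of the selection step only: estimate each disconnected configuration's contribution with Brown's formula ΔE_K = (E − H_KK) B_K^2 / (1 − B_K^2), keep those above a threshold, and accumulate the discarded contributions as the truncation error ΔE_dis.

    # Selection by Brown's energy formula; the energy, diagonal elements and estimated
    # coefficients below are illustrative placeholders, not values from the paper.
    def brown_delta(E, H_KK, B_K):
        return (E - H_KK) * B_K**2 / (1.0 - B_K**2)

    E = -128.90                                                      # current CI energy (placeholder)
    configs = [(-120.1, 0.012), (-118.7, 0.0009), (-121.4, 0.004)]   # (H_KK, estimated B_K)
    threshold = 1e-6

    selected, dE_dis = [], 0.0
    for H_KK, B_K in configs:
        dE = brown_delta(E, H_KK, B_K)
        if abs(dE) >= threshold:
            selected.append((H_KK, B_K))
        else:
            dE_dis += dE        # discarded contributions sum to the truncation error
    print(len(selected), dE_dis)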

Relevance: 30.00%

Publisher:

Abstract:

An implicitly parallel method for integral-block driven restricted active space self-consistent field (RASSCF) algorithms is presented. The approach is based on a model space representation of the RAS active orbitals with an efficient expansion of the model subspaces. The applicability of the method is demonstrated with a RASSCF investigation of the first two excited states of indole.

Relevance: 30.00%

Publisher:

Abstract:

This CEPS Task Force Report focuses on how to improve water efficiency in Europe, notably in public supply, households, agriculture, energy and manufacturing as well as across sectors. It presents a number of recommendations on how to make better use of economic policy instruments to sustainably manage the EU’s water resources. Published in the run-up to the European Commission’s “Blueprint to Safeguard Europe’s Waters”, the report contributes to the policy deliberations in two ways. First, by assessing the viability of economic policy instruments, it addresses a major shortcoming that has so far prevented the 2000 EU Water Framework Directive (WFD) from becoming fully effective in practice: the lack of appropriate, coherent and effective instruments in (some) member states. Second, as the Task Force report is the result of an interactive process involving a variety of stakeholders, it is able to point to the key differences in interpreting and applying WFD principles that have led to a lack of policy coherence across the EU and to offer some pragmatic advice on moving forward.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes), which is a vertical (1-D) integrated radiative transfer and energy balance model. The model links visible to thermal infrared radiance spectra (0.4 to 50 μm) as observed above the canopy to the fluxes of water, heat and carbon dioxide, as a function of vegetation structure and the vertical profiles of temperature. Output of the model is the spectrum of outgoing radiation in the viewing direction and the turbulent heat fluxes, photosynthesis and chlorophyll fluorescence. A special routine is dedicated to the calculation of photosynthesis rate and chlorophyll fluorescence at the leaf level as a function of net radiation and leaf temperature. The fluorescence contributions from individual leaves are integrated over the canopy layer to calculate top-of-canopy fluorescence. The calculation of radiative transfer and the energy balance is fully integrated, allowing for feedback between leaf temperatures, leaf chlorophyll fluorescence and radiative fluxes. Leaf temperatures are calculated on the basis of energy balance closure. Model simulations were evaluated against observations reported in the literature and against data collected during field campaigns. These evaluations showed that SCOPE is able to reproduce realistic radiance spectra, directional radiance and energy balance fluxes. The model may be applied for the design of algorithms for the retrieval of evapotranspiration from optical and thermal earth observation data, for validation of existing methods to monitor vegetation functioning, to help interpret canopy fluorescence measurements, and to study the relationships between synoptic observations and diurnally integrated quantities. The model has been implemented in Matlab and has a modular design, thus allowing for great flexibility and scalability.
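
A minimal sketch in Python (with illustrative constants and latent heat neglected; not SCOPE's actual formulation) of energy balance closure for a single leaf: solve for the leaf temperature at which absorbed radiation equals emitted longwave plus sensible heat.

    # Toy leaf energy balance: find t_leaf such that absorbed radiation = emitted longwave
    # + sensible heat. Constants and aerodynamic resistance are illustrative assumptions.
    from scipy.optimize import brentq

    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m-2 K-4
    RHO_CP = 1200.0          # volumetric heat capacity of air, J m-3 K-1
    T_AIR = 298.0            # air temperature, K
    R_ABS = 600.0            # absorbed shortwave + longwave radiation, W m-2
    RA = 50.0                # aerodynamic resistance, s m-1

    def residual(t_leaf):
        emitted = SIGMA * t_leaf**4
        sensible = RHO_CP * (t_leaf - T_AIR) / RA
        return R_ABS - emitted - sensible          # zero when the budget closes

    t_leaf = brentq(residual, 250.0, 350.0)
    print(f"leaf temperature: {t_leaf:.1f} K")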

Relevance: 30.00%

Publisher:

Abstract:

An efficient method is described for the approximate calculation of the intensity of multiply scattered lidar returns. It divides the outgoing photons into three populations, representing those that have experienced zero, one, and more than one forward-scattering event. Each population is parameterized at each range gate by its total energy, its spatial variance, the variance of photon direction, and the covariance of photon direction and position. The result is that for an N-point profile the calculation is O(N^2) efficient and implicitly includes up to Nth-order scattering, making it ideal for use in iterative retrieval algorithms for which speed is crucial. In contrast, models that explicitly consider each scattering order separately are at best O(N^m/m!) efficient for mth-order scattering and often cannot be performed to more than the third or fourth order in retrieval algorithms. For typical cloud profiles and a wide range of lidar fields of view, the new algorithm is as accurate as an explicit calculation truncated at the fifth or sixth order but faster by several orders of magnitude. (C) 2006 Optical Society of America.
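
A heavily simplified sketch in Python (with a made-up scattering profile and forward-lobe width; not the paper's algorithm) of the bookkeeping implied for one photon population: at each range gate the variance of photon direction grows through forward scattering, while the position-direction covariance and the spatial variance grow kinematically.

    # March outward one range gate at a time, tracking the second moments of a single
    # forward-scattered photon population. All numbers are illustrative placeholders.
    dr = 30.0                          # range-gate length (m)
    theta2_fs = (0.1e-3) ** 2          # angular variance added per forward-scattering event (rad^2)
    alpha = [0.0] * 10 + [2e-3] * 40   # scattering coefficient profile (m^-1), placeholder cloud

    var_pos, var_dir, cov = 0.0, 0.0, 0.0    # spatial variance, direction variance, covariance
    for a in alpha:
        var_dir += a * dr * theta2_fs        # broadening of photon direction by forward scattering
        cov     += var_dir * dr              # coupling of direction to transverse position
        var_pos += 2.0 * cov * dr            # growth of the population's spatial variance
    print(var_pos ** 0.5, var_dir ** 0.5)    # footprint and beam-spread estimates at the last gate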

Relevance: 30.00%

Publisher:

Abstract:

Frequent pattern discovery in structured data is receiving increasing attention in many application areas of the sciences. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimation is available for task workloads, which show a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute’s HIV-screening dataset.
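
A sketch in Python (threads on one machine standing in for a multi-domain distributed setting; expand is a placeholder for pattern extension and support counting) of the dynamic load balancing such an irregular tree-structured computation needs: workers pull subtree tasks from a shared queue and push newly generated subtasks back, so idle workers immediately pick up more work.

    # Dynamic work pool for an irregular tree-structured search; task sizes are unknown in
    # advance, so balancing happens by pulling from a shared queue rather than by static
    # partitioning. The task structure below is a toy placeholder.
    import queue, threading, random

    work = queue.Queue()
    results, lock = [], threading.Lock()

    def expand(task):
        """Placeholder for extending a pattern; returns (result, child tasks)."""
        depth = task
        children = [depth + 1] * random.randint(0, 3) if depth < 6 else []
        return depth, children

    def worker():
        while True:
            try:
                task = work.get(timeout=0.2)      # exit once the pool stays empty
            except queue.Empty:
                return
            found, children = expand(task)
            with lock:
                results.append(found)
            for c in children:
                work.put(c)                       # newly generated subtasks go back to the pool
            work.task_done()

    work.put(0)                                   # root pattern
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(len(results), "patterns explored")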