971 results for "Efficient dominating set"


Relevance: 30.00%

Abstract:

The objective of this work was to evaluate the contribution of efficient nitrogen-fixing rhizobial strains to the grain yield of new cowpea cultivars indicated for cultivation in the Brazilian Semiarid region, in the sub-middle São Francisco River Valley. Two experiments were set up at the irrigated perimeters of Mandacaru (Juazeiro, state of Bahia) and Bebedouro (Petrolina, state of Pernambuco). The treatments consisted of single inoculation with one of five rhizobial strains: BR 3267, BR 3262, INPA 03-11B, and UFLA 03-84 (Bradyrhizobium sp.), and BR 3299T (Microvirga vignae); besides these, a treatment with nitrogen and a control without inoculation or N application were included. The following cowpea cultivars were evaluated: BRS Pujante, BRS Tapaihum, BRS Carijó, and BRS Acauã. A randomized complete block design with four replicates was used. Inoculated plants showed grain yields similar to those of plants fertilized with 80 kg ha-1 of N. The cultivars BRS Tapaihum and BRS Pujante stood out in grain yield and protein content when inoculated, showing their potential for cultivation in the sub-middle São Francisco River Valley.

Relevance: 30.00%

Abstract:

Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The task is usually modeled as an inverse problem in which the regularization term plays a central role in reconstruction quality. The literature has been drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms to residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization term relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
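
As a rough, self-contained sketch of the inverse-problem formulation described above (not the authors' pipeline: the forward operator, the weight lam, the step size, and the omission of motion correction are all simplifying assumptions), minimizing a data-fidelity term plus a smoothed TV penalty by gradient descent looks like this:

```python
import numpy as np

def downsample(x, f):
    """Toy LR operator: average f-by-f blocks (stand-in for blur + decimation).
    Assumes both image sides are divisible by f."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f):
    """Adjoint of the block-averaging operator."""
    return np.kron(y, np.ones((f, f))) / (f * f)

def tv_grad(x, eps=0.1):
    """Gradient of a smoothed isotropic Total Variation energy."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
    px, py = dx / mag, dy / mag
    # TV gradient is minus the divergence of the normalized image gradient.
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def sr_tv(lr_stack, f, lam=0.05, step=0.1, iters=300):
    """Minimize sum_k ||D x - y_k||^2 + lam * TV(x) by gradient descent.
    lam, step, and iters are placeholders to be tuned."""
    x = upsample(np.mean(lr_stack, axis=0), f) * f * f  # block-replicate init
    for _ in range(iters):
        grad = sum(2.0 * upsample(downsample(x, f) - y, f) for y in lr_stack)
        x -= step * (grad + lam * tv_grad(x))
    return x
```

In the actual fetal setting each LR slice stack would additionally carry its own estimated rigid motion, folded into the forward operator D.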

Relevance: 30.00%

Abstract:

Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and, more recently, non-local means [4]. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
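
The quoted O(1/n²) and O(1/√ε) rates are the hallmark of Nesterov-type accelerated schemes such as FISTA, which differ from the plain O(1/n) proximal-gradient loop only by an inexpensive extrapolation step. A generic sketch of that class of method (not necessarily the paper's exact algorithm), assuming grad_f (gradient of the data-fidelity term), prox_g (proximal operator of the TV term), and a Lipschitz constant L are given:

```python
import math

def fista(grad_f, prox_g, x0, L, iters=100):
    """Accelerated proximal gradient (FISTA): O(1/n^2) objective decay.
    x0 is a NumPy array; grad_f and prox_g are callables."""
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(z - grad_f(z) / L, 1.0 / L)     # forward-backward step
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

def proximal_gradient(grad_f, prox_g, x0, L, iters=100):
    """Plain (unaccelerated) version for comparison: O(1/n) decay."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - grad_f(x) / L, 1.0 / L)
    return x
```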

Relevance: 30.00%

Abstract:

This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both can be attained simultaneously by using dedicated hardware. The target level of computational performance for the REALJava virtual machine is initially set to be as fast as the currently available full-custom ASIC Java processors. As a secondary goal, all components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customization of the resulting system; for instance, floating-point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading, and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full-custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
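
Instruction folding, listed among the techniques above, merges a short run of stack-oriented bytecodes into a single register-style operation, so the pipeline executes one folded instruction instead of several. A toy sketch of the idea (the pattern table, opcode names, and tuple encoding are illustrative, not REALJava's actual instruction set):

```python
# Fold the stack pattern [iload, iload, <op>, istore] into one
# three-address operation (dest, src1, src2). Purely illustrative.
FOLDABLE_OPS = {"iadd", "isub", "imul"}

def fold(bytecodes):
    """bytecodes: list of (opcode, operand) tuples; returns a folded list."""
    out, i = [], 0
    while i < len(bytecodes):
        w = bytecodes[i:i + 4]
        if (len(w) == 4 and w[0][0] == "iload" and w[1][0] == "iload"
                and w[2][0] in FOLDABLE_OPS and w[3][0] == "istore"):
            out.append(("fold_" + w[2][0], w[3][1], w[0][1], w[1][1]))
            i += 4                       # consumed four stack instructions
        else:
            out.append(bytecodes[i])
            i += 1
    return out

# Example: local_0 = local_1 + local_2 compiled to stack code, then folded.
code = [("iload", 1), ("iload", 2), ("iadd", None), ("istore", 0)]
print(fold(code))   # [('fold_iadd', 0, 1, 2)]
```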

Relevance: 30.00%

Abstract:

Materials based on tungstophosphoric acid (TPA) immobilized on NH4ZSM5 zeolite were prepared by wet impregnation of the zeolite matrix with TPA aqueous solutions, whose concentration was varied in order to obtain TPA contents of 5%, 10%, 20%, and 30% w/w in the solid. The materials were characterized by N2 adsorption-desorption isotherms, XRD, FT-IR, 31P MAS-NMR, TGA-DSC, and DRS-UV-Vis, and their acidic behavior was studied by potentiometric titration with n-butylamine. The BET surface area (SBET) decreased when the TPA content was raised, as a result of zeolite pore blocking. The X-ray diffraction patterns of the TPA-modified solids presented only the characteristic peaks of the NH4ZSM5 zeolite and an additional set of peaks assigned to the presence of (NH4)3PW12O40. According to the Fourier transform infrared and 31P magic-angle-spinning nuclear magnetic resonance spectra, the main species present in the samples was the [PW12O40]3- anion, which was partially transformed into the [P2W21O71]6- anion during the synthesis and drying steps. The thermal stability of the NH4ZSM5TPA materials was similar to that of their parent zeolites. Moreover, the samples with the highest TPA content exhibited band-gap energy values similar to those reported for TiO2. The immobilization of TPA on NH4ZSM5 zeolite yielded catalysts with high photocatalytic activity in the degradation of the methyl orange dye (MO) in water at 25 °C. These catalysts can be reused at least three times without any significant decrease in the degree of degradation.

Relevance: 30.00%

Abstract:

ABSTRACT - (Phenology, fruit set and dispersal of Cordia multispicata Cham., an important weed shrub of abandoned pastures in eastern Amazonia). The reproductive ecology of the distylous tropical shrub Cordia multispicata was studied in an abandoned pasture in Paragominas County, Pará state, Brazil. It is a common species in the Amazon basin, where it occurs as a weed in open and disturbed habitats. C. multispicata has many flowers per inflorescence (85 ± 12), but 84% abort before fertilization. Flowering occurs throughout the year. Fruits are small, with a red fleshy pericarp (skin-pulp) attractive to birds. Fruit set is lower during the dry season (less than 30%) and higher during the rainy season, when insect visits to the flowers are frequent. Fruiting peaks between the end of the dry season and the middle of the rainy season. Nineteen bird species were observed foraging for the fruits of C. multispicata, and 79% of those species can be considered potential dispersal agents. Efficient seed dispersal and an aggregated spatial distribution, together with some characteristics of the dispersers, greatly contributed to the success of this species in abandoned pastures of eastern Amazonia.

Relevance: 30.00%

Abstract:

The aim of this thesis was to create a process for all multi-site ramp-up (MSRU) projects in the case company, in order to have simultaneous ramp-ups early in the market. The research was done as a case study in one company, using semi-structured interviews. Processes already exist and are in use in MSRU cases. Interviews with 20 ramp-up specialists revealed the topics to be improved: project team setup; roles, responsibilities, and recommended project organization; communication; product change management practices; competence and know-how transfer practices; and the support model. More R&D support and involvement is needed in MSRU projects. The DCM's role within the PMT team is very important in MSRU projects; the DCM should be the business owner of the project. The recommendation is that product programs take care of the product and repair training for new products in volume factories. R&D's participation in competence transfers is essential in MSRU projects. Project communication could be shared through a dedicated intranet community, and blogging and tweeting could be considered in the communication plan. If hundreds of change notes are still open in the ramp-up phase, not approving the product for volume ramp-up should be considered. PMT support is also important, and MSRU projects should be planned, budgeted, and executed jointly. Finally, a new MSRU process to be used in all MSRU projects is presented in this thesis.

Relevance: 30.00%

Abstract:

The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
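
A brute-force version of the fetch-length computation is easy to state; the thesis's contribution lies in doing it efficiently for millions of points and vertices. The sketch below (function names and input format are our assumptions) shoots a ray from a study point in one compass direction and returns the distance to the nearest intersected shoreline segment:

```python
import math

def ray_segment_distance(px, py, dx, dy, ax, ay, bx, by):
    """Distance along the ray (p + t*d, t >= 0) to segment a-b, or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                           # ray parallel to segment
        return None
    t = ((ax - px) * ey - (ay - py) * ex) / denom    # parameter along the ray
    u = ((ax - px) * dy - (ay - py) * dx) / denom    # parameter along segment
    return t if (t >= 0 and 0 <= u <= 1) else None

def fetch_length(point, direction_deg, shoreline_segments):
    """Shortest distance from a study point to any shoreline segment in the
    given direction; brute force over all segments."""
    px, py = point
    rad = math.radians(direction_deg)
    dx, dy = math.sin(rad), math.cos(rad)            # compass: 0 deg = north
    best = None
    for (ax, ay), (bx, by) in shoreline_segments:
        t = ray_segment_distance(px, py, dx, dy, ax, ay, bx, by)
        if t is not None and (best is None or t < best):
            best = t
    return best

# Example: one study point, a single shoreline segment 10 units due north.
print(fetch_length((0.0, 0.0), 0.0, [((-5.0, 10.0), (5.0, 10.0))]))  # 10.0
```

An efficient solution must avoid the inner loop over all shoreline segments, e.g. via spatial indexing or sweep-based precomputation (our guess; the thesis's actual techniques are described in the articles).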

Relevance: 30.00%

Abstract:

We analyze collective choice procedures with respect to their rationalizability by means of profiles of individual preference orderings. A selection function is a generalization of a choice function where selected alternatives may depend on a reference (or status quo) alternative in addition to the set of feasible options. Given the number of agents n, a selection function satisfies efficient and non-deteriorating n-rationalizability if there exists a profile of n orderings on the universal set of alternatives such that the selected alternatives are (i) efficient for that profile, and (ii) at least as good as the reference option according to each individual preference. We analyze efficient and non-deteriorating collective choice in a general abstract framework and provide a characterization result given a universal set domain.
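
In symbols, the two requirements read roughly as follows (our paraphrase with assumed notation: S(A, r) is the set selected from feasible set A under reference alternative r, R_i is agent i's ordering with strict part P_i, and weak Pareto efficiency is assumed in (i)):

```latex
\forall (A, r),\ \forall x \in S(A, r):\qquad
\underbrace{\nexists\, y \in A \ \text{with}\ y \,P_i\, x \ \text{for all } i}_{\text{(i) efficiency}}
\quad\text{and}\quad
\underbrace{x \,R_i\, r \ \text{for all } i}_{\text{(ii) non-deterioration}}
```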

Relevance: 30.00%

Abstract:

In a linear production model, we characterize the class of efficient and strategy-proof allocation functions, and the class of efficient and coalition strategy-proof allocation functions. In the former class, requiring equal treatment of equals allows us to identify a unique allocation function. This function is also the unique member of the latter class which satisfies uniform treatment of uniforms.

Relevance: 30.00%

Abstract:

We study the assignment of indivisible objects with quotas (houses, jobs, or offices) to a set of agents (students, job applicants, or professors). Each agent receives at most one object, and monetary compensations are not possible. We characterize efficient priority rules by efficiency, strategy-proofness, and reallocation-consistency. Such a rule respects an acyclical priority structure, and the allocations can be determined using the deferred acceptance algorithm.
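
A compact sketch of how such an allocation can be computed with the (agent-proposing) deferred acceptance algorithm; the data shapes and names are illustrative assumptions:

```python
def deferred_acceptance(prefs, priority, quota):
    """prefs[a]: agent a's ranked list of acceptable objects;
    priority[o]: object o's priority list over agents (first = highest);
    quota[o]: number of copies of object o."""
    next_idx = {a: 0 for a in prefs}         # next object each agent proposes to
    held = {o: [] for o in quota}            # tentatively accepted agents
    free = list(prefs)
    while free:
        a = free.pop()
        if next_idx[a] >= len(prefs[a]):
            continue                         # list exhausted: agent unassigned
        o = prefs[a][next_idx[a]]
        next_idx[a] += 1
        held[o].append(a)
        held[o].sort(key=priority[o].index)  # keep the highest-priority agents
        while len(held[o]) > quota[o]:
            free.append(held[o].pop())       # reject the lowest-priority agent
    return {a: o for o, agents in held.items() for a in agents}

# Example: both agents want office1, but bob has higher priority there.
prefs = {"ann": ["office1", "office2"], "bob": ["office1"]}
priority = {"office1": ["bob", "ann"], "office2": ["ann", "bob"]}
quota = {"office1": 1, "office2": 1}
print(deferred_acceptance(prefs, priority, quota))
# -> {'bob': 'office1', 'ann': 'office2'}
```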

Relevance: 30.00%

Abstract:

The attached file was created with Scientific WorkPlace (LaTeX).

Relevance: 30.00%

Abstract:

The theme of the thesis is centred around one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found energy-efficient: clusters are formed, data from the different nodes in each cluster are collected by a cluster head, and the cluster head then forwards them to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work, two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wasted when clusters are rebuilt from all the nodes: a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, the scheme is compared with an existing cluster-based scheme, and the simulation study confirmed the energy savings. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbor-node information in the cluster head selection process; the number of nodes joining a cluster is restricted in order to obtain equally sized, optimal clusters, and multi-hop communication among the cluster heads is introduced to reduce energy consumption. The simulation study showed that the scheme results in balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion of the study was that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to utilise in various applications. Sensor network programming is inherently difficult for various reasons, so there must be an elegant way to collect the data gathered by a sensor network without worrying about its underlying structure. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way: a service-oriented-architecture-based application is built, and the data collection task is exposed as a web service. This enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
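
A toy version of vote-based, energy-aware cluster head election of the kind described (the scoring rule and data shapes are our illustrative assumptions, not the thesis's exact scheme):

```python
import math

def elect_cluster_heads(nodes, radio_range, n_heads):
    """nodes: {node_id: (x, y, residual_energy)}. Every node votes for the
    in-range neighbour (or itself) with the highest residual energy; the
    most-voted nodes become cluster heads."""
    votes = {nid: 0 for nid in nodes}
    for nid, (x, y, _) in nodes.items():
        in_range = [m for m, (mx, my, _) in nodes.items()
                    if math.hypot(mx - x, my - y) <= radio_range]
        votes[max(in_range, key=lambda m: nodes[m][2])] += 1
    return sorted(votes, key=votes.get, reverse=True)[:n_heads]

# Example: two spatial clusters of two nodes each; pick two heads.
nodes = {1: (0, 0, 0.9), 2: (1, 0, 0.4), 3: (5, 5, 0.8), 4: (5, 6, 0.3)}
print(elect_cluster_heads(nodes, radio_range=2.0, n_heads=2))  # [1, 3]
```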

Relevance: 30.00%

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis, an algebraic framework is presented that makes it possible to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This allows the minimal input for such a task to be determined, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits on certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit; it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to reach a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases, specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.

Relevance: 30.00%

Abstract:

Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots to gather optical data from the seafloor. Cost and weight constraints mean that low-cost Remotely Operated Vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, in which nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation, and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory, and the contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image-similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches for large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison is made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
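
A minimal sketch of the MST step for tentative topology estimation (Kruskal with union-find over pairwise similarity scores; the similarity source and data shapes are assumptions):

```python
def mst_topology(n_images, similarity):
    """similarity: {(i, j): score}. Returns the maximum-similarity spanning
    tree as a list of image pairs, i.e. a tentative mosaic topology."""
    parent = list(range(n_images))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    tree = []
    for (i, j), score in sorted(similarity.items(),
                                key=lambda kv: kv[1], reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:                         # links two components: keep pair
            parent[ri] = rj
            tree.append((i, j, score))
    return tree

# Example: 4 images; scores could come from, e.g., small-thumbnail correlation.
sim = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.7, (0, 3): 0.2, (0, 2): 0.1}
print(mst_topology(4, sim))   # [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.7)]
```

In the thesis's framework, this tentative topology is then refined by attempting image matching on the pairs with the most overlap evidence.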