Abstract:
Fish cage culture is a rapid aquacultural practice that produces higher fish yields than traditional pond culture. Species cultured by this method include Cyprinus carpio, Oreochromis niloticus, Sarotherodon galilaeus, Tilapia zillii, Clarias lazera, C. gariepinus, Heterobranchus bidorsalis, Citharinus citharus, Distichodus rostratus and Alestes dentex. However, the culture of fish in cages has problems arising either from mechanical defects of the cage or from disease due to infection. The mechanical problems, which may lead to clogged nets, toxicity and easy access by predators, depend on defects associated with the various types of netting, including folded sieve-cloth, wire, polypropylene, nylon, galvanized and welded nets. The disease problems are of two types. The first is introduced diseases due to parasites: crustaceans (Ergasilus sp., Argulus africanus and Lamproglena sp.), helminths (Diplostomulum tregnna) and protozoans (Trichodina sp., Myxosoma sp. and Myxobolus sp.). The second is inherent diseases aggravated by the nutrient-rich cage environment, which promotes rapid bacterial, saprophytic fungal and phytoplanktonic blooms, resulting in clogged nets, stagnant water and oxygen depletion through elevated biochemical oxygen demand (BOD). The consequences are fish kills and the prevalence of gill rot and dropsy. Recommendations on routine cage hygiene, diagnosis and control procedures to reduce fish mortality are highlighted.
Abstract:
We present a prototype that implements a set of logical rules to prove satisfiability for a class of specifications on XML documents. Specifications are given by means of constraints built on Boolean XPath patterns. The main goal of this tool is to test whether a given specification is satisfiable, showing the history of the execution. It can also be used to test whether a given document is a model of a given specification and, as a by-product, it allows one to look for all the relations (monomorphisms) between two patterns, or the result of combining patterns in different ways. The results of these operations are shown visually, which makes them easier to understand. The implementation of the algorithm is written in Prolog, but the prototype has a Java interface for easy and friendly use.
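As a rough illustration of the model-checking side of such a tool (the prototype itself is written in Prolog with a Java interface), the following Python sketch tests whether a document satisfies a specification given as implication constraints over Boolean XPath patterns; the constraints and file name are hypothetical.

```python
# Sketch only: checking whether an XML document is a model of a
# specification given as implication constraints over Boolean XPath
# patterns. The actual prototype is written in Prolog with a Java
# interface; the constraints and file name below are hypothetical.
from lxml import etree

# Each constraint reads: if the premise pattern holds, the conclusion
# pattern must hold too.
CONSTRAINTS = [
    ("boolean(//order)", "boolean(//order/customer)"),
    ("boolean(//book)", "boolean(//book[@isbn])"),
]

def is_model(xml_path: str) -> bool:
    doc = etree.parse(xml_path)
    for premise, conclusion in CONSTRAINTS:
        if doc.xpath(premise) and not doc.xpath(conclusion):
            return False  # document violates this constraint
    return True

if __name__ == "__main__":
    print(is_model("catalog.xml"))  # hypothetical input document
```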
Abstract:
Smart and mobile environments require seamless connections. However, wireless connections are often interrupted by the frequent cycle of "discovery" and disconnection of mobile devices while data interchange is in progress. To minimize this drawback, a protocol that enables easy and fast synchronization is crucial. With this in mind, Bluetooth technology appears to be a suitable solution for maintaining such connections thanks to the discovery and pairing capabilities it provides. Nonetheless, the time and energy spent when several devices are being discovered and used at the same time still need to be managed properly. It is essential that this discovery process take as little time and energy as possible. In addition, it is believed that communication performance is not constant as transmission speeds and throughput increase, but this has not been proved formally. Therefore, the purpose of this project is twofold: first, to design and build a framework capable of performing controlled Bluetooth device discovery, pairing and communications; second, to analyze and test the scalability and performance of the classic Bluetooth standard under different scenarios, with various sensors and devices, using the framework developed. To achieve the first goal, a generic Bluetooth platform will be used to control the test conditions and to form a ubiquitous wireless system connected to an Android smartphone. For the latter goal, various stress tests will be carried out to measure the battery consumption rate as well as the quality of the communications between the devices involved.
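For a flavor of the controlled-discovery step, here is a minimal sketch using the PyBluez library rather than the framework developed in the project:

```python
# Minimal sketch of timed Bluetooth device discovery with the PyBluez
# library -- an illustration of the controlled-discovery step, not the
# framework developed in the project.
import time
import bluetooth  # PyBluez

start = time.monotonic()
# 'duration' is measured in inquiry intervals of 1.28 s each.
devices = bluetooth.discover_devices(duration=8, lookup_names=True)
elapsed = time.monotonic() - start

print(f"Found {len(devices)} device(s) in {elapsed:.1f} s")
for addr, name in devices:
    print(f"  {addr}  {name}")
```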
Abstract:
Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).
(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.
(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter with the important lengthscales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating and (d) narrower phase loops.
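For reference, and as an assumption not spelled out in the abstract, the dimensionless phase buoyancy parameter used in this literature is commonly defined (following Christensen and Yuen) as:

```latex
% Assumed standard definition of the dimensionless phase buoyancy
% parameter: \gamma = Clapeyron slope, \Delta\rho = density jump across
% the transition, \rho = reference density, g = gravity,
% \alpha = thermal expansivity, d = layer depth.
P = \frac{\gamma\,\Delta\rho}{\alpha\,\rho^{2}\,g\,d}
```

An endothermic transition has gamma < 0 and hence P < 0; deflection sets in once P falls below a critical negative value, and the scaling result above describes how that critical value varies with wavelength, up/downwelling width, heating mode and phase-loop width.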
(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.
The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
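A minimal sketch of the baseline technique referenced above (the standard graphical lasso, not the thesis's modified variant), using scikit-learn on toy data:

```python
# Sketch of the baseline technique: recovering a sparse
# conditional-dependence graph with the graphical lasso (scikit-learn).
# Toy data only; the thesis's modification for ill-conditioned
# covariance matrices is not reproduced here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy "nodal signals" on a chain 0 - 1 - 2: nodes 0 and 2 are
# conditionally independent given node 1.
n = 5000
x1 = rng.normal(size=n)
x0 = x1 + 0.5 * rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_  # sparse inverse covariance estimate

# Report edges: significant off-diagonal entries of the precision matrix.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(P[i, j]) > 1e-2]
print(edges)  # expect [(0, 1), (1, 2)] and no (0, 2) edge
```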
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
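For context, a classical fluid model of the kind being refined here is Kelly's primal algorithm; the sketch below simulates it on a single link under the usual assumption, questioned above, that the link sees the original source rates (all parameter values are made up):

```python
# Sketch of a classical fluid model (Kelly's primal algorithm) on one
# link, simulated by Euler integration. Models of this kind assume each
# link sees the original source rates; the thesis refines this by
# modeling buffering along the path. All parameter values are made up.
import numpy as np

C = 10.0                    # link capacity
w = np.array([1.0, 2.0])    # users' willingness to pay
kappa = 0.1                 # controller gain
x = np.array([1.0, 1.0])    # initial source rates
dt = 0.01

def price(y):
    # Link price as a function of aggregate rate y (a common choice).
    return max(y - C, 0.0) / C

for _ in range(20000):
    p = price(x.sum())
    x += dt * kappa * (w - x * p)   # dx/dt = kappa * (w - x * p)
    x = np.maximum(x, 1e-6)         # keep rates positive

print(x)  # equilibrium rates are proportional to w
```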
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
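A toy sketch of the relaxation idea common to the last two paragraphs, with made-up numbers: the nonconvex line coupling between the two end flows is relaxed to an inequality (over-delivery), which yields a convex program; under the stated monotonicity and convexity assumptions the relaxation is tight at the optimum.

```python
# Toy sketch of the relaxation idea: the nonconvex line coupling
#   p21 == -p12 + loss(p12), with convex loss,
# is relaxed to p21 >= -p12 + loss(p12) ("over-delivery"), giving a
# convex program. All numbers are made up for illustration.
import cvxpy as cp

p12 = cp.Variable()   # flow measured at the bus-1 end of the line
p21 = cp.Variable()   # flow measured at the bus-2 end of the line

loss = 0.05 * cp.square(p12)      # convex quadratic line loss
constraints = [
    p21 >= -p12 + loss,           # relaxed line equation
    p12 >= -2, p12 <= 2,          # box constraints on flows
    p21 >= -2, p21 <= 2,
    -p21 >= 1.0,                  # at least 1.0 must arrive at bus 2
]
prob = cp.Problem(cp.Minimize(cp.square(p12)), constraints)
prob.solve()

# At the optimum the relaxed inequality is tight, i.e. the relaxation
# recovers a feasible point of the original nonconvex problem.
print(p12.value, p21.value)
```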
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
Over the past five years, the cost of solar panels has dropped drastically and, in concert, the number of installed modules has risen exponentially. However, solar electricity is still more than twice as expensive as electricity from a natural gas plant. Fortunately, wire array solar cells have emerged as a promising technology for further lowering the cost of solar.
Si wire array solar cells are formed with a unique, low cost growth method and use 100 times less material than conventional Si cells. The wires can be embedded in a transparent, flexible polymer to create a free-standing array that can be rolled up for easy installation in a variety of form factors. Furthermore, by incorporating multijunctions into the wire morphology, higher efficiencies can be achieved while taking advantage of the unique defect relaxation pathways afforded by the 3D wire geometry.
The work in this thesis shepherded Si wires from undoped arrays to flexible, functional large area devices and laid the groundwork for multijunction wire array cells. Fabrication techniques were developed to turn intrinsic Si wires into full p-n junctions and the wires were passivated with a-Si:H and a-SiNx:H. Single wire devices yielded open circuit voltages of 600 mV and efficiencies of 9%. The arrays were then embedded in a polymer and contacted with a transparent, flexible, Ni nanoparticle and Ag nanowire top contact. The contact connected >99% of the wires in parallel and yielded flexible, substrate free solar cells featuring hundreds of thousands of wires.
Building on the success of the Si wire arrays, GaP was epitaxially grown on the material to create heterostructures for photoelectrochemistry. These cells were limited by low absorption in the GaP due to its indirect bandgap, and by poor current collection due to a diffusion length of only 80 nm. However, GaAsP on SiGe offers a superior combination of materials, and wire architectures based on these semiconductors were investigated for multijunction arrays. These devices offer potential efficiencies of 34%, as demonstrated through an analytical model and optoelectronic simulations. SiGe and Ge wires were fabricated via chemical-vapor deposition and reactive ion etching. GaAs was then grown on these substrates at the National Renewable Energy Lab and yielded nanosecond-scale lifetime components, as required for achieving high-efficiency devices.
Abstract:
Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide broadband parametric gain, not limited by the specific atomic or molecular transitions of conventional OFCs. Emerging applications of microcombs include low-noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high-capacity telecommunications.
In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterizing the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs (microcombs) with microwave repetition rates below 80 GHz are demonstrated on a chip for the first time. Low threshold power (as low as 1 mW) is demonstrated across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase locking of microcombs is demonstrated, in which injection locking of the subcomb offset frequencies is observed by pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. Shot-noise-limited white phase noise of a microcomb is further demonstrated for the first time. Finally, stabilization of the microcomb repetition rate is realized by phase-locked-loop control.
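For context (a standard convention in the microcomb literature, not specific to this thesis), FSR and dispersion are usually characterized by expanding the resonator mode frequencies around a reference mode:

```latex
% Resonator mode frequencies \omega_\mu versus relative mode index \mu:
% D_1/2\pi is the FSR at the reference mode, D_2 the leading
% group-velocity-dispersion term (D_2 > 0 for anomalous dispersion).
\omega_{\mu} = \omega_{0} + D_{1}\,\mu + \tfrac{1}{2}\,D_{2}\,\mu^{2}
             + \tfrac{1}{6}\,D_{3}\,\mu^{3} + \cdots
```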
For another major nonlinear optical application of disk resonators, highly coherent stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with record-low Schawlow-Townes noise for any chip-based laser (less than 0.1 Hz^2/Hz) and technical noise comparable to commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high-performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. It is robust enough to enable incorporation as the optical voltage-controlled oscillator in the first demonstration of a photonics-based microwave frequency synthesizer. Finally, applications of microresonators as frequency reference cavities and low-phase-noise optomechanical oscillators are presented.
Abstract:
[EUS] Detecting the highly intellectually gifted children present in society is not easy. For that reason, identifying the characteristics of these students has been the core of this work. Based on various authors and theories, two didactic contributions have been carried out. A documentary has been produced that introduces these students and their characteristics, in which the myths and misconceptions surrounding this collective are debunked. The second contribution of the work has been a list of observation tools helpful to families and teachers when identifying these children, which will help to detect them in any environment.
Abstract:
DNA damage is extremely detrimental to the cell and must be repaired to protect the genome. DNA is capable of conducting charge through the overlapping π-orbitals of stacked bases; this phenomenon is extremely sensitive to the integrity of the π-stack, as perturbations attenuate DNA charge transport (CT). Based on the E. coli base excision repair (BER) proteins EndoIII and MutY, it has recently been proposed that redox-active proteins containing metal clusters can utilize DNA CT to signal one another to locate sites of DNA damage.
To expand our repertoire of proteins that utilize DNA-mediated signaling, we measured the DNA-bound redox potential of the nucleotide excision repair (NER) helicase XPD from Sulfolobus acidocaldarius. A midpoint potential of 82 mV versus NHE was observed, resembling that of the previously reported BER proteins. The redox signal increases in intensity upon ATP hydrolysis only for the WT protein and mutants that maintain ATPase activity, and not for ATPase-deficient mutants. The signal increase correlates directly with ATPase activity, suggesting that DNA-mediated signaling may play a general role in protein signaling. Several mutations in human XPD that lead to XP-related diseases have been identified; using SaXPD, we explored how these mutations, which are conserved in the thermophile, affect protein electrochemistry.
To further understand the electrochemical signaling of XPD, we studied the yeast S. cerevisiae Rad3 protein. ScRad3 mutants were incubated on a DNA-modified electrode and exhibited a redox potential similar to that of SaXPD. We developed a haploid strain of S. cerevisiae that allowed for easy manipulation of Rad3. In a survival assay, the ATPase- and helicase-deficient mutants showed little survival, while the two disease-related mutants exhibited survival similar to WT. When WT and G47R (ATPase/helicase-deficient) strains were challenged with different DNA-damaging agents, both exhibited comparable survival in the presence of hydroxyurea, while with methyl methanesulfonate and camptothecin the G47R strain exhibited a significant change in growth, suggesting that Rad3 is involved in repairing damage beyond traditional NER substrates. Together, these data expand our understanding of redox-active proteins at the interface of DNA repair.
Abstract:
This is a final-year degree project (GAL) on the education of people with functional diversity. After reviewing the history of education in this field, it examines the education systems of the Basque Autonomous Community (EAE) and Ireland. It is a research work: in addition to an in-depth bibliographic review of the subject, data were collected in both countries through questionnaires completed by teachers from each. The aim of these instruments was to ascertain and compare the attitudes of both groups, in recognition of their importance. The research reveals that the two countries have two different ways of understanding special education: while in the EAE we are on the path from integration to inclusion, in Ireland there are many contradictions, since inclusion is promoted while special schools are maintained. Ireland has things to change, but so do we. Inclusion is the path to follow; it will not be easy, but living together with children with functional diversity is an opportunity, and we can all win.
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions by studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer but likely cannot be approximately sampled in randomized polynomial time, even with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
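As a toy illustration of the sampling task (a simplified variant over Z_2^n, i.e., Fourier sampling with Hadamards rather than the full QFT; the Boolean function is arbitrary), the brute-force classical computation below makes explicit why small cases are tractable while the general problem scales exponentially:

```python
# Toy illustration: for a Boolean function f on n bits, a quantum
# computer can prepare sum_x (-1)^f(x) |x> / 2^(n/2), apply Hadamards,
# and measure, obtaining s with probability fhat(s)^2. The brute-force
# classical computation below takes exponential time, feasible only
# for tiny n. This is Fourier sampling over Z_2^n, a simplified
# stand-in for the QFT-based distribution discussed above.
import itertools, random

n = 4

def f(x):
    # An arbitrary efficiently computable Boolean function.
    return (x[0] & x[1]) ^ x[2] ^ (x[1] & x[3])

def fourier_coeff(s):
    # fhat(s) = 2^-n * sum_x (-1)^(f(x) + s.x)
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        dot = sum(si * xi for si, xi in zip(s, x)) % 2
        total += (-1) ** (f(x) ^ dot)
    return total / 2 ** n

dist = {s: fourier_coeff(s) ** 2
        for s in itertools.product((0, 1), repeat=n)}
assert abs(sum(dist.values()) - 1) < 1e-9  # Parseval: a valid distribution

sample = random.choices(list(dist), weights=list(dist.values()))[0]
print(sample, dist[sample])
```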
Design of a very-low-power wind turbine for supplying isolated electrical systems
Abstract:
[ES] This project develops the design and optimization of a very-low-power wind turbine aimed at supplying isolated electrical systems, specifically a lighting system. All components of the wind turbine system are analyzed, comparing the possible alternatives and choosing simple, cheap and efficient solutions for each of them; a first sizing of the components is also carried out. The design effort has focused on the blades, considered the most critical and least standardized component. Tools for their detailed analysis and aerodynamic optimization have been studied and implemented in Matlab, which has made it possible to generate an optimal design adjusted to the needs of the system. The corresponding planning tasks have also been carried out, analyzing the multiple technical, economic, social and environmental benefits that the execution of this project will bring, with a complete schedule and an analysis of the costs involved.
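For a flavor of the kind of blade analysis involved (the thesis tools are implemented in Matlab; this Python sketch uses an idealized airfoil model and made-up rotor parameters), a minimal blade-element-momentum (BEM) iteration:

```python
# Minimal blade-element-momentum (BEM) iteration of the kind used for
# aerodynamic rotor analysis. The thesis implements its tools in
# Matlab; this Python sketch uses an idealized airfoil (flat-plate lift
# slope, constant drag) and made-up rotor parameters.
import math

B = 3         # number of blades
R = 0.6       # rotor radius [m] (very small turbine)
TSR = 6.0     # tip-speed ratio

def bem_element(r, chord, twist):
    """Solve axial/tangential induction factors (a, ap) at radius r."""
    sigma = B * chord / (2 * math.pi * r)   # local solidity
    lam_r = TSR * r / R                     # local speed ratio
    a, ap = 0.3, 0.0
    for _ in range(200):
        phi = math.atan((1 - a) / ((1 + ap) * lam_r))  # inflow angle
        alpha = phi - twist                 # angle of attack
        cl = 2 * math.pi * alpha            # idealized lift slope
        cd = 0.01                           # constant profile drag
        cn = cl * math.cos(phi) + cd * math.sin(phi)
        ct = cl * math.sin(phi) - cd * math.cos(phi)
        a_new = 1 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < 1e-8 and abs(ap_new - ap) < 1e-8:
            break
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)  # relaxed update
    return a, ap

print(bem_element(r=0.45, chord=0.05, twist=math.radians(5)))
```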
Abstract:
The institution of democratic city management occupies a central place in debates and reflections on the Right to the City and the City Statute (Estatuto da Cidade). Specifically regarding the content of these reflections, democratic management commands an impressive consensus: it is not easy to find arguments denying the validity of building an active citizenship for the management of urban affairs. This work seeks to understand, concretely, the reason for this consensus, anchored in a critical conception that draws attention to aspects of the social that are not usually contemplated in the analyses of the jurists devoted to the theme. In this sense, questions such as the alienation of labor and its relation to political alienation, and the real situation of cities viewed through the analytical category of the FES (economic and social formation), should not be ignored in discussions about the possibilities of effective, active citizen participation in Brazilian cities. Finally, it is necessary, starting from the highlighted categories, to critically analyze the instruments of the City Statute so as not to allow further illusions about the state of affairs discussed here.
Abstract:
The attitude of the medieval church towards violence before the First Crusade in 1095 underwent a significant institutional evolution, from the peaceful tradition of the New Testament and the Roman persecution, through the prelate-led military campaigns of the Carolingian period and the Peace of God era. It would be superficially easy to characterize this transformation as the pragmatic and entirely secular response of a growing power to a changing world. However, such a simplification does not fully do justice to the underlying theology. While church leaders from the 5th century to the 11th had vastly different motivations and circumstances under which to develop their responses to a variety of violent activities, the teachings of Augustine of Hippo provided a unifying theme. Augustine’s just war theology, in establishing which conflicts are acceptable in the eyes of God, focused on determining whether a proper causa belli or basis for war exists, and then whether a legitimate authority declares and leads the war. Augustine masterfully integrated aspects of the Old and New Testaments to create a lasting and compelling case for his definition of justified violence. Although at different times and places his theology has been used to support a variety of different attitudes, the profound influence of his work on the medieval church’s evolving position on violence is clear.