907 results for control over life
Abstract:
The response of phytoplankton assemblages to hydrographic forcing across the southern Brazilian shelf was studied using data collected during winter (June 2012), complemented with MODIS-Aqua satellite imagery. The in situ data set comprised water column structure properties (derived from CTD casts), dissolved inorganic nutrients (ammonium, nitrite, nitrate, phosphate and silicate), and phytoplankton biomass [chlorophyll a (Chl a) concentration] and composition. Phytoplankton assemblages were assessed by both microscopy and HPLC-CHEMTAX approaches. A canonical correspondence analysis associating physical, chemical and phytoplankton composition data at the surface revealed a tight coupling between the phytoplankton community and hydrographic conditions, with marked environmental gradients across three domains: the pelagic, outer shelf Tropical Water (TW); the mid shelf domain under the influence of Subtropical Shelf Water (STSW); and the inner shelf domain mainly under the influence of the riverine outflow of the Plata River Plume Water (PPW). Results showed that the intrusion of low-salinity, nutrient-rich PPW stimulated phytoplankton growth and diversity within the inner shelf region, with enhanced Chl a levels (>1.3 mg/m³) and a great abundance of diatoms, ciliates, dinoflagellates, raphidophyceans and cryptophytes. Conversely, other diatoms (e.g. Rhizosolenia clevei), tiny prochlorophytes and cyanobacteria, and a noticeable contribution of dinoflagellates and other flagellates, associated with lower Chl a levels (<0.93 mg/m³), characterized the TW domain, where low nutrient concentrations and a deep upper mixed layer were found. The transitional mid shelf domain showed intermediate levels of both nutrients and Chl a (1.06-1.59 mg/m³), and phytoplankton was mainly composed of dinoflagellates such as Dinophysis spp. and gymnodinioids. Overall, the results show considerable winter phytoplankton diversity in this sector of the southwestern Atlantic Ocean.
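As a concrete illustration of the ordination step named above, the following is a minimal NumPy sketch of canonical correspondence analysis following the standard chi-square-transform-plus-SVD recipe; the site-by-species and site-by-environment matrices are synthetic stand-ins, not the cruise data:

```python
# Minimal canonical correspondence analysis (CCA); synthetic matrices
# stand in for the cruise data described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.poisson(5.0, size=(20, 8)).astype(float)   # sites x species counts
X = rng.normal(size=(20, 3))                       # sites x env. variables

P = Y / Y.sum()
r = P.sum(axis=1)            # site (row) weights
c = P.sum(axis=0)            # species (column) weights

# Chi-square-standardized residual matrix.
Q = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# Weighted standardization of the environmental variables.
Xs = X - r @ X
Xs /= np.sqrt(r @ Xs**2)
B = Xs * np.sqrt(r)[:, None]

# Constrain Q to the space spanned by the environmental variables, then SVD.
proj = B @ np.linalg.solve(B.T @ B, B.T @ Q)
U, s, Vt = np.linalg.svd(proj, full_matrices=False)

print("canonical eigenvalues:", np.round(s[:3]**2, 4))
```

The squared singular values are the canonical eigenvalues, i.e. the share of community variation explained by the environmentally constrained axes.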
Abstract:
1. Developing a framework for assessing interactions between multiple anthropogenic stressors remains an important goal in environmental research. In coastal ecosystems, the relative effects of aspects of global climate change (e.g. CO2 concentrations) and localized stressors (e.g. eutrophication), in combination, have received limited attention. 2. Using a long-term (11 month) field experiment, we examine how epiphyte assemblages in a tropical seagrass meadow respond to factorial manipulations of dissolved carbon dioxide (CO2(aq)) and nutrient enrichment. In situ CO2(aq) manipulations were conducted using clear, open-top chambers, which replicated carbonate parameter forecasts for the year 2100. Nutrient enrichment consisted of monthly additions of slow-release fertilizer, nitrogen (N) and phosphorus (P), to the sediments at rates equivalent to theoretical maximum rates of anthropogenic loading within the region (1.54 g N/m²/d and 0.24 g P/m²/d). 3. Epiphyte community structure was assessed on a seasonal basis and revealed declines in the abundance of coralline algae, along with increases in filamentous algae under elevated CO2(aq). Surprisingly, nutrient enrichment had no effect on epiphyte community structure or overall epiphyte loading. Interactions between CO2(aq) and nutrient enrichment were not detected. Furthermore, CO2(aq)-mediated responses in the epiphyte community displayed strong seasonality, suggesting that climate change studies in variable environments should be conducted over extended time-scales. 4. Synthesis. The observed responses indicate that for certain locations, global stressors such as ocean acidification may take precedence over local eutrophication in altering the community structure of seagrass epiphyte assemblages. Given that nutrient-driven algal overgrowth is commonly cited as a widespread cause of seagrass decline, our findings highlight that alternate climate change forces may exert proximate control over epiphyte community structure.
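The factorial design in point 2 is typically analyzed with a two-way ANOVA, where the CO2(aq) × nutrient interaction term is what "interactions were not detected" refers to. A minimal sketch on synthetic data (the response name, effect sizes and sample sizes are assumptions, chosen to mimic the reported pattern of a CO2 effect only):

```python
# Two-way factorial ANOVA sketch: CO2 and nutrient main effects plus
# their interaction, on synthetic data (all values are assumptions).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
co2 = np.repeat(["ambient", "elevated"], 40)
nut = np.tile(np.repeat(["control", "enriched"], 20), 2)
# Elevated CO2 raises the response; nutrients and the interaction add nothing.
load = rng.normal(10.0, 2.0, 80) + np.where(co2 == "elevated", 3.0, 0.0)

df = pd.DataFrame({"co2": co2, "nutrients": nut, "epiphyte_load": load})
model = ols("epiphyte_load ~ C(co2) * C(nutrients)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction row
```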
Abstract:
Coral reefs are essential to many nations, and are currently in global decline. Although climate models predict decreases in seawater pH (0.3 units) and oxygen saturation (5 percentage points), these are exceeded by the current daily pH and oxygen fluctuations on many reefs (pH 7.8-8.7 and 27-241% O2 saturation). We investigated the effect of oxygen and pH fluctuations on coral calcification in the laboratory using the model species Acropora millepora. Light calcification rates were greatly enhanced (+178%) by increased seawater pH, but only at normoxia; hyperoxia completely negated this positive effect. Dark calcification rates were significantly inhibited (51-75%) at hypoxia, whereas pH had no effect. Our preliminary results suggest that within the current oxygen and pH range, oxygen has substantial control over coral growth, whereas the role of pH is limited. This has implications for reef formation in this era of rapid climate change, which is accompanied by a decrease in seawater oxygen saturation owing to higher water temperatures and coastal eutrophication.
Abstract:
As in many other developing countries, family businesses are major players in the Peruvian economy. Despite their growth into large-scale groups spanning a wide range of businesses, the owner families still retain strong control over ownership and management. However, Peru's liberal economic reforms in the 1990s brought intense competition into the national market. These family businesses have been forced to compete not only against large-scale foreign capital that entered the national market through the privatization of state enterprises, but also against cheap goods imported from abroad. In order to compete, family businesses have had to move beyond the limited human resources available within the family. Measures owner families are taking to overcome these limitations include advancing better-educated and better-trained younger generations within the family, and promoting professional salaried managers from outside the family to top managerial positions.
Abstract:
It has been argued that, as joint stock companies develop in size and scope, they are transformed from family firms into managerial firms. Such managerial firms would have many small shareholders; hence the ownership and management of the firm would be separated. However, in many developing countries, including Peru, family businesses, in which families control both ownership and management, still play an important role in the national economy. Since the liberalization of the economy that began in Peru in the 1990s, the national market has become more competitive owing to the increased participation of foreign capital. To remain competitive, it is indispensable for family businesses to obtain management resources, financial, human and technological, from outside the family. In order to do so without losing control over ownership and management, Peruvian family businesses have incorporated companies with distinct characteristics, to the extent that they can secure control over ownership and management within their group. While keeping exclusive control of companies in traditional sectors, they actively seek alliances with other families and with foreign capital in competitive sectors. Managing companies with different degrees of control allows them to survive in today's rapidly changing business environment.
Abstract:
Accurate control over the spent nuclear fuel content is essential for its safe and optimized transportation, storage and management. Consequently, the reactivity of spent fuel and its isotopic content must be accurately determined. Nowadays, predicting isotopic evolution throughout irradiation and decay periods is no longer a problem, thanks to the development of powerful codes and methodologies. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, it is desirable to determine how uncertainties in the basic nuclear data affect isotopic prediction calculations by quantifying their associated uncertainties.
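The abstract does not specify the propagation method; a minimal sketch of one common approach, Monte Carlo sampling of an uncertain nuclear-data parameter through a toy one-nuclide depletion law, is shown below. All numerical values are assumptions for illustration:

```python
# Toy Monte Carlo propagation of a nuclear-data uncertainty into an
# isotopic prediction: a single nuclide depleted under constant flux.
# Cross-section, its uncertainty, flux and time are all assumed values.
import numpy as np

rng = np.random.default_rng(42)
n0 = 1.0e24          # initial number of atoms (arbitrary)
phi = 3.0e14         # neutron flux [n/cm^2/s], assumed
t = 3.0e7            # irradiation time [s] (~1 year)
sigma = 50.0e-24     # absorption cross-section [cm^2], assumed nominal
rel_unc = 0.05       # 5 % (1-sigma) nuclear-data uncertainty, assumed

samples = rng.normal(sigma, rel_unc * sigma, size=100_000)
n_end = n0 * np.exp(-samples * phi * t)   # depleted inventory per sample

print(f"N(t) mean = {n_end.mean():.4e}")
print(f"relative uncertainty on N(t) = {n_end.std() / n_end.mean():.2%}")
```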
Abstract:
A novel time integration scheme is presented for the numerical solution of the dynamics of discrete systems consisting of point masses and thermo-visco-elastic springs. Even for fully coupled constitutive laws for the elements, the obtained solutions strictly preserve the two laws of thermodynamics and the symmetries of the continuum evolution equations. Moreover, the unconditional control over the energy and the entropy growth has the effect of stabilizing the numerical solution, allowing the use of larger time steps than those suitable for comparable implicit algorithms. Proofs for these claims are provided in the article, as well as numerical examples that illustrate the performance of the method.
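The scheme itself is not given in the abstract. As a minimal illustration of what unconditional energy control means for a time integrator, here is an implicit-midpoint step for a single linear, non-dissipative spring-mass, written in closed form for the linear case; this rule conserves the discrete energy exactly for any step size, though the paper's thermo-visco-elastic scheme is considerably richer:

```python
# Implicit-midpoint step for one linear spring-mass. For a quadratic
# Hamiltonian this rule conserves the discrete energy exactly at any
# step size h -- the kind of unconditional energy control the abstract
# refers to. Parameters are assumed, not from the paper.
import numpy as np

m, k = 1.0, 4.0          # mass and stiffness, assumed
h = 0.5                  # deliberately large time step
x, v = 1.0, 0.0
energy = lambda x, v: 0.5 * m * v**2 + 0.5 * k * x**2

e0 = energy(x, v)
a = h * h * k / (4.0 * m)
for _ in range(1000):
    # Closed-form solution of the implicit midpoint equations:
    #   x1 = x0 + h*(v0+v1)/2,  v1 = v0 - h*(k/m)*(x0+x1)/2
    v_new = ((1.0 - a) * v - h * k / m * x) / (1.0 + a)
    x = x + 0.5 * h * (v + v_new)
    v = v_new

print(f"energy drift after 1000 steps: {abs(energy(x, v) - e0):.2e}")
```

Running this with the large step h = 0.5 shows energy drift only at round-off level, whereas explicit Euler at the same step size gains energy every step.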
Abstract:
The development of high efficiency laser diodes (LDs) and light emitting diodes (LEDs) covering the 1.0 to 1.55 μm region of the spectrum using GaAs heteroepitaxy has long been pursued. Due to the lack of materials that can be grown lattice-matched to GaAs with bandgaps in the 1.0 to 1.55 μm region, quantum wells (QWs) or quantum dots (QDs) need to be used. The most successful approach with QWs has been to use InGaAs, but another element, such as N, must be added to reach 1.3/1.5 μm. Even though LDs have been successfully demonstrated with the QW approach, using N leads to problems with compositional homogeneity across the wafer and to limited efficiency due to strong non-radiative recombination. The alternative approach of using InAs QDs is an attractive option but, once again, reaching the longest wavelengths requires very large QDs and control over the size distribution and band alignment. In this work we demonstrate InAs/GaAsSb QD LEDs with high efficiencies, emitting from 1.1 to 1.52 μm, and we analyze the band alignment and carrier loss mechanisms that result from the presence of Sb in the capping layer.
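For orientation, the quoted wavelength targets convert to transition energies through the standard photon-energy relation (a textbook conversion, not a result of the paper):

\[ E = \frac{hc}{\lambda} \approx \frac{1.24\ \mathrm{eV\,\mu m}}{\lambda}, \qquad E(1.0\ \mu\mathrm{m}) \approx 1.24\ \mathrm{eV}, \quad E(1.55\ \mu\mathrm{m}) \approx 0.80\ \mathrm{eV}, \]

so the active region must provide transitions between roughly 0.80 and 1.24 eV, which is what rules out the materials available lattice-matched to GaAs.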
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a consensus-based distributed version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (sink nodes) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
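The abstract leaves the suboptimal estimator and its consensus implementation unspecified; the sketch below shows the centralized version of the local refinement it mentions, a Gauss-Newton search under the common log-distance RSSI path-loss model. The geometry, path-loss parameters and noise level are all assumptions:

```python
# Gauss-Newton refinement for RSSI-based target localization under the
# usual log-distance path-loss model. Centralized sketch of the local
# search the abstract distributes via consensus; all values assumed.
import numpy as np

rng = np.random.default_rng(3)
anchors = rng.uniform(0.0, 100.0, size=(12, 2))   # sensor positions [m]
target = np.array([55.0, 40.0])
p0, eta = -40.0, 3.0          # RSSI at 1 m [dBm] and path-loss exponent

def model(x):
    d = np.linalg.norm(anchors - x, axis=1)
    return p0 - 10.0 * eta * np.log10(d)

rssi = model(target) + rng.normal(0.0, 1.0, size=len(anchors))  # noisy dBm

x = np.array([50.0, 50.0])    # coarse initial estimate
for _ in range(20):
    diff = x - anchors
    d2 = np.sum(diff**2, axis=1)
    J = -(10.0 * eta / np.log(10.0)) * diff / d2[:, None]  # d(model)/dx
    r = rssi - model(x)                                    # residuals
    x = x + np.linalg.solve(J.T @ J, J.T @ r)              # GN step

print("estimate:", np.round(x, 2), " true:", target)
```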
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
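As a back-of-envelope version of the lifetime-maximization idea (not the thesis' algorithm): assume conjugate-phase, coherent transmissions, per-node energy budgets E_i spent evenly over a horizon T, flat channel gain magnitudes |h_i|, and a received-power QoS threshold gamma. Each node's sustainable amplitude then scales as sqrt(E_i/T), so received power scales as 1/T and the maximum lifetime has a closed form:

```python
# Lifetime bound for collaborative beamforming under per-node energy
# budgets: received power ( sum_i h_i*sqrt(E_i/T) )^2 = C/T, so the QoS
# constraint C/T >= gamma gives T_max = C/gamma. All numbers assumed.
import numpy as np

rng = np.random.default_rng(7)
n = 25
E = rng.uniform(50.0, 200.0, n)     # per-node energy budgets [J]
h = np.abs(rng.normal(size=n))      # channel gain magnitudes to the BS
gamma = 400.0                       # required received power (QoS)

T_max = (h @ np.sqrt(E))**2 / gamma
print(f"maximum network lifetime: {T_max:.1f} s")

# Sanity check: at T_max the QoS constraint holds with equality.
recv_power = (h @ np.sqrt(E / T_max))**2
print(f"received power at T_max: {recv_power:.1f} (gamma = {gamma})")
```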
Abstract:
The technological world is shifting toward optimized resource management, driven by the powerful influence of technologies such as virtualization and cloud computing. This report examines both, from the causes that motivated them to their latest trends, identifying their main characteristics, advantages and disadvantages. Meanwhile, the Digital Home is already a reality for most people. It provides access to multiple types of telecommunication networks (3G, 4G, Wi-Fi, ADSL...) of varying capacity, allowing Internet connections from anywhere, at any time, with practically any device (personal computers, smartphones, tablets, televisions...). Companies take advantage of this to offer all kinds of services. Some of these services are based on cloud computing, above all cloud storage for devices with limited capacity, such as smartphones and tablets. That storage space normally resides on servers under the control of large companies. Storing private documents, videos and photos without any certainty that they will not be viewed by someone without consent can make users wary. For users who want control over their privacy, there is the option of setting up their own servers and their own cloud service, to share private information only with family and friends, or with anyone they grant permission. During the project, several solutions were compared, most of them open source and freely distributed, each able to deploy at least a storage service accessible through the Internet. Some of them complement this with music and video streaming, document sharing and synchronization across multiple devices, calendars, backups, desktop virtualization, file versioning, chats, etc. The project ends with a demonstration of how digital home devices interact with a cloud server on which one of the compared solutions has previously been installed and configured. This server is packaged in a virtual machine so that it is easily transportable and usable.