891 results for Data Storage Solutions


Relevance:

30.00%

Publisher:

Abstract:

The W3C Best Practices for Multilingual Linked Open Data community group was born one year ago during the last MLW workshop in Rome. Today it continues to lead the effort of a large community toward acquiring a shared view of the issues caused by multilingualism on the Web of Data and their possible solutions. Despite our initial optimism, we found the task of identifying best practices for ML-LOD a difficult one, requiring a deep understanding of the Web of Data in its multilingual dimension and in its practical problems. In this talk we will review the progress of the group so far, mainly in the identification and analysis of topics, use cases, and design patterns, as well as the future challenges.
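As a concrete illustration of the kind of issue the group works on, the sketch below uses Python's rdflib (an assumed tool, not something named in the talk) to show the language-tagging pattern that keeps a single Web-of-Data resource usable across languages.

```python
# Minimal sketch (assumed tooling: rdflib) of the language-tagging pattern
# central to multilingual Linked Open Data: one resource, one label per language.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.add((EX.Spain, RDFS.label, Literal("Spain", lang="en")))
g.add((EX.Spain, RDFS.label, Literal("España", lang="es")))

# Language tags let one dataset serve several linguistic communities:
# here we retrieve only the Spanish labels.
q = 'SELECT ?label WHERE { ?s rdfs:label ?label . FILTER(lang(?label) = "es") }'
for row in g.query(q, initNs={"rdfs": RDFS}):
    print(row.label)  # -> España
```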

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, multi-sensor data fusion has become a broadly demanded discipline for building advanced solutions that can be applied in many real-world situations, both civil and military. In defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats on the battlefield, and identifying and protecting strategic own forces. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. With an appropriate data fusion technique, such systems can automatically exploit all relevant information from multiple sources to meet mission needs or support daily supervision operations. This paper focuses on the application of data fusion to active vehicle monitoring in an area of high-density traffic, and on how it is redirecting the research being carried out in computer vision, signal processing, and machine learning to improve the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system fuses data at the feature level, with features extracted from a video camera and a laser scanner. In addition, we present a stochastic tracker that introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the preceding detection output. This computer vision tracker has been shown to help detect objects even under poor visual information. Finally, just as humans analyze both temporal and spatial relations among items in a scene to assign them meaning, once the target objects have been correctly detected and tracked, machines should provide a trustworthy description of what is happening in the scene under surveillance. Accomplishing such an ambitious task requires a hierarchical machine-learning architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed, a closed circuit where real traffic situations can be simulated, has been implemented for the evaluation of the proposed modular system. First results have shown the strength of the proposed system.
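As an illustration of the tracking stage, the sketch below implements a generic bootstrap particle filter that keeps predicting through occlusions (measurements of None). The dynamics and noise models are assumptions for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, meas_std=2.0, proc_std=0.5):
    """One bootstrap-filter step: propagate, weight, resample.
    `z` is a fused position measurement (e.g. camera + laser feature),
    or None when the target is occluded."""
    # Propagate with a constant-velocity model plus process noise.
    particles[:, 0] += particles[:, 1]
    particles += rng.normal(0.0, proc_std, particles.shape)
    if z is not None:
        # Re-weight by the Gaussian measurement likelihood.
        weights *= np.exp(-0.5 * ((particles[:, 0] - z) / meas_std) ** 2)
        weights += 1e-300
        weights /= weights.sum()
    # else: under occlusion, keep predicting on the prior weights.
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / len(weights)
    return (particles * weights[:, None]).sum(axis=0)  # [position, velocity] estimate

# Hypothetical usage: track a target through a 3-frame occlusion.
N = 500
particles = np.column_stack([rng.normal(0, 5, N), rng.normal(1, 1, N)])
weights = np.full(N, 1.0 / N)
for z in [1.0, 2.1, None, None, None, 5.9]:
    est = particle_filter_step(particles, weights, z)
print("position, velocity estimate:", est)
```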

Relevance:

30.00%

Publisher:

Abstract:

Data centers are found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application.

The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.

When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective.

Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques.

In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, and to data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
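The server-level tradeoff described above can be made concrete with a toy model: fan power grows cubically with fan speed while leakage power grows exponentially with temperature, so total server power has an interior minimum in fan speed. All constants below are invented for illustration and are not the thesis' fitted models.

```python
import numpy as np

def total_power(fan_speed, p_dynamic=100.0):
    """Toy server power model (illustrative constants, not the thesis' fits).
    Higher fan speed costs cubic fan power but lowers CPU temperature,
    which cuts exponential leakage power."""
    p_fan = 0.02 * fan_speed ** 3                   # cubic fan law
    cpu_temp = 90.0 - 4.0 * fan_speed               # crude linear cooling effect
    p_leak = 5.0 * np.exp(0.05 * (cpu_temp - 50.0)) # exponential leakage vs. temperature
    return p_dynamic + p_fan + p_leak

# Sweeping fan speed exposes the leakage-cooling tradeoff: too little cooling
# pays in leakage, too much pays in fan power.
speeds = np.linspace(1.0, 10.0, 200)
best = speeds[np.argmin([total_power(s) for s in speeds])]
print(f"minimum total power at fan speed {best:.2f} (arbitrary units)")
```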

Relevance:

30.00%

Publisher:

Abstract:

In structural analysis and design, engineering problems frequently arise that require the dynamic analysis of large finite element models, reaching millions of degrees of freedom and handling large volumes of data. The complexity and size of the analyses soar when parametric analysis is required. This problem has traditionally been approached in several ways: first, by increasing the computing power and storage capacity of the computer systems involved in the analysis; second, by simplifying the parametric analyses, reducing their number or detail; and finally, by employing methods complementary to finite elements that reduce the number of variables and simplify execution while keeping the results close to the actual behaviour of the structure.

Following this third option, we propose a reduction method based on a simplified analysis that supplies a solution for the dynamic response of a structure in the complex modal subspace using a greatly reduced volume of data. Parametric analyses varying multiple parameters can thus be performed to obtain a solution very close to the desired objective. We propose not only the variation of local mass, stiffness, and damping properties, but also the addition of degrees of freedom to the original structure, for the calculation of both the steady-state and the transient response. Additionally, the simple implementation of the procedure allows exhaustive control over the problem variables and the implementation of improvements such as different ways of obtaining the eigenvalues or the removal of damping limitations in the original structure.

The purpose of the method is similar to that of the Guyan method or other model order reduction techniques used in structural dynamics. However, although the method can be combined with such techniques to obtain the advantages of both, the present procedure does not condense the system of equations; instead, it uses the information of the complete system of equations while studying the response only in the appropriate variables at the points of interest to the analyst. That interest may come from the need to obtain the response of large structures at specific locations, or from the need to modify the structure at suitable positions in order to change its behaviour (acceleration, velocity, or displacement response) under dynamic loads. The procedure is therefore particularly suitable for selecting the optimal value of several parameters in large structures (on the order of hundreds of thousands of modes), such as the location of added elements, stiffnesses, masses, or viscous damping values, in preliminary studies in which several solutions are devised and optimized and which, in the case of large structures, may require an extremely high number of simulations to reach the optimal solution.

After introducing the required tools and developing the procedure, a study case is proposed: the finite element model (FEM) of the MILANO UAV developed by the Instituto Nacional de Técnica Aeroespacial. Requirements are imposed on this structure when incorporating a piece of equipment: accelerations at the left wing tip and displacements at the right wing tip, in the presence of the lift produced by a continuous sinusoidal wind gust. The proposed modification consists of the addition of equipment at the left wing tip, either through a rigid attachment or through a dynamic response reduction system with variable mass, stiffness, and damping properties. The study of the results allows the parameters of the attenuation system to be optimized by means of multiple dynamic analyses so that the requirements imposed with the modification are met as well as possible. The results are compared with those obtained using commercial finite element analysis software, showing very close agreement between the complete and reduced models. The influence of several factors, such as the modal damping of the original structure, the number of modes retained in the truncation, or the precision provided by the frequency sweep, is analysed in detail. Finally, the efficiency of the proposed method is addressed in terms of computational time and data volume compared with other approaches. It can therefore be concluded that the proposed method is a useful and efficient option for the parametric analysis of local modifications in large structures.
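As an illustration of the kind of computation the summary describes, the sketch below evaluates a steady-state frequency response from a truncated set of modes on a tiny 3-DOF system. It is a generic modal-superposition example, not the proposed procedure or the MILANO model.

```python
import numpy as np

# Illustrative 3-DOF spring-mass chain (not the MILANO FEM): compute the
# steady-state frequency response between two DOFs of interest using only
# a truncated set of modes, which is where the data reduction comes from.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
omega2, Phi = np.linalg.eigh(K)  # M = I here; use scipy.linalg.eigh(K, M) in general
kept = slice(0, 2)               # modal truncation: retain 2 of 3 modes
zeta = 0.02                      # assumed uniform modal damping ratio

def receptance(dof_out, dof_in, w):
    """H(w) from a unit force at dof_in to displacement at dof_out,
    summed over the retained modes only."""
    wn = np.sqrt(omega2[kept])
    num = Phi[dof_out, kept] * Phi[dof_in, kept]
    return np.sum(num / (wn**2 - w**2 + 2j * zeta * wn * w))

print(abs(receptance(2, 0, 0.8)))  # |H| at one excitation frequency of interest
```

Only the retained mode shapes at the DOFs of interest enter the sum, which is why the response at a few points of a very large model can be swept over frequency, or over design parameters, at low cost.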

Relevance:

30.00%

Publisher:

Abstract:

Context-aware applications, which can adapt their behavior to changing environments, are attracting more and more attention. To reduce the complexity of developing such applications, context-aware middleware, which introduces context awareness into traditional middleware, provides a homogeneous interface with generic context management solutions. This paper surveys state-of-the-art context-aware middleware architectures proposed between 2009 and 2015. First, preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context-aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security and privacy, context-awareness level, and cloud-based big data analytics. The analysis shows that no existing context-aware middleware architecture complies with all requirements. Finally, challenges are pointed out as open issues for future work.
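As a concrete illustration of the "homogeneous interface" idea, here is a minimal publish/subscribe context store in Python. It is a generic sketch of the pattern, not one of the eleven surveyed architectures.

```python
from collections import defaultdict
from typing import Any, Callable

class ContextStore:
    """Minimal sketch of the homogeneous interface a context-aware middleware
    exposes: producers publish context, applications subscribe, and a
    reasoning hook can derive higher-level context from raw context."""
    def __init__(self) -> None:
        self._ctx: dict[str, Any] = {}
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def publish(self, key: str, value: Any) -> None:
        self._ctx[key] = value
        for cb in self._subs[key]:
            cb(value)

    def subscribe(self, key: str, cb: Callable[[Any], None]) -> None:
        self._subs[key].append(cb)

store = ContextStore()
# Derived context: a trivial "reasoner" turning raw sensor data into a situation.
store.subscribe("room.temperature",
                lambda t: store.publish("room.overheated", t > 30.0))
store.subscribe("room.overheated", lambda hot: print("adapt behavior:", hot))
store.publish("room.temperature", 31.5)  # -> adapt behavior: True
```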

Relevance:

30.00%

Publisher:

Abstract:

We prove global existence and uniqueness of strong solutions to the logarithmic porous medium type equation with fractional diffusion ∂ₜu + (−Δ)^{1/2} log(1 + u) = 0, posed for x ∈ ℝ, with nonnegative initial data in some function space of L log L type. The solutions are shown to become bounded and C^∞ smooth in (x, t) for all positive times. We also reformulate this equation as a transport equation with nonlocal velocity and critical viscosity, a topic of current relevance. Interesting functional inequalities are involved.
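For readers unfamiliar with the operator, (−Δ)^{1/2} acts in Fourier space as multiplication by |ξ|. The sketch below, an illustration of the operator only and unrelated to the paper's proofs (which work on the whole line and establish existence and regularity, not numerics), takes one explicit time step computed spectrally on a periodic grid.

```python
import numpy as np

# Spectral sketch: (-Δ)^{1/2} is the Fourier multiplier |ξ|, so one explicit
# Euler step of ∂ₜu = -(-Δ)^{1/2} log(1 + u) on a periodic grid looks like this.
N, L, dt = 256, 2 * np.pi, 1e-3
x = np.linspace(0, L, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi  # angular Fourier frequencies
u = np.exp(-10 * (x - np.pi) ** 2)           # nonnegative initial datum

def half_laplacian(f):
    """Apply (-Δ)^{1/2} via the |ξ| multiplier."""
    return np.fft.ifft(np.abs(xi) * np.fft.fft(f)).real

u = u - dt * half_laplacian(np.log1p(u))     # one explicit time step
# Total mass is conserved by the step: the multiplier vanishes at ξ = 0.
print("mass after one step:", u.sum() * (L / N))
```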

Relevance:

30.00%

Publisher:

Abstract:

Theoretical models suggest that overlapping generations, in combination with a temporally fluctuating environment, may allow the persistence of competitors that otherwise would not coexist. Despite extensive theoretical development, this “storage effect” hypothesis has received little empirical attention. Herein I present the first explicit mathematical analysis of the contribution of the storage effect to the dynamics of competing natural populations. In Oneida Lake, NY, data collected over the past 30 years show a striking negative correlation between the water-column densities of two species of suspension-feeding zooplankton, Daphnia galeata mendotae and Daphnia pulicaria. I have demonstrated competition between these two species and have shown that both possess long-lived eggs that establish overlapping generations. Moreover, recruitment to this long-lived stage varies annually, so that both daphnids have years in which they are favored (for recruitment) relative to their competitor. When the long-term population growth rates are calculated both with and without the effects of a variable environment, I show that D. galeata mendotae clearly cannot persist without the environmental variation and prolonged dormancy (i.e., storage effect) whereas D. pulicaria persists through consistently high per capita recruitment to the long-lived stage.
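The with/without-variation comparison the abstract describes can be illustrated with a toy lottery-type calculation in the spirit of storage-effect theory. All parameters are invented for the illustration, not the Oneida Lake estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def invasion_growth_rate(s=0.9, sigma=1.0, rho=-0.5, years=200_000):
    """Toy lottery-model sketch of the storage effect (invented parameters,
    not the Oneida Lake fits). A long-lived stage (adults or an egg bank)
    survives each year with probability s; per-capita recruitment of invader
    and resident fluctuates log-normally, partially out of phase (rho < 0).
    The invader's long-term low-density growth rate is
        r_i = E[ ln( s + (1 - s) * b_i / b_r ) ].
    """
    cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
    z = rng.multivariate_normal([0.0, 0.0], cov, years)
    b_i, b_r = np.exp(z[:, 0]), np.exp(z[:, 1])
    return np.mean(np.log(s + (1 - s) * b_i / b_r))

# The long-lived stage "stores" good recruitment years; dropping either the
# variation or the storage removes the invader's positive growth rate.
print("variation + long-lived stage:", invasion_growth_rate(s=0.9, sigma=1.0))  # > 0
print("no environmental variation  :", invasion_growth_rate(s=0.9, sigma=0.0))  # ~ 0
print("variation, no storage       :", invasion_growth_rate(s=0.0, sigma=1.0))  # ~ 0
```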

Relevance:

30.00%

Publisher:

Abstract:

Acknowledgements We would like to gratefully acknowledge the data provided by SEPA, Iain Malcolm. Mark Speed, Susan Waldron and many MSS staff helped with sample collection and lab analysis. We thank the European Research Council (project GA 335910 VEWA) for funding and are grateful for the constructive comments provided by three anonymous reviewers.

Relevance:

30.00%

Publisher:

Abstract:

Fabry disease is a lysosomal storage disorder caused by a deficiency of the lysosomal enzyme α-galactosidase A (α-gal A). This enzymatic defect results in the accumulation of the glycosphingolipid globotriaosylceramide (Gb3; also referred to as ceramidetrihexoside) throughout the body. To investigate the effects of purified α-gal A, 10 patients with Fabry disease received a single i.v. infusion of one of five escalating dose levels of the enzyme. The objectives of this study were: (i) to evaluate the safety of administered α-gal A, (ii) to assess the pharmacokinetics of i.v.-administered α-gal A in plasma and liver, and (iii) to determine the effect of this replacement enzyme on hepatic, urine sediment and plasma concentrations of Gb3. α-Gal A infusions were well tolerated in all patients. Immunohistochemical staining of liver tissue approximately 2 days after enzyme infusion identified α-gal A in several cell types, including sinusoidal endothelial cells, Kupffer cells, and hepatocytes, suggesting diffuse uptake via the mannose 6-phosphate receptor. The tissue half-life in the liver was greater than 24 hr. After the single dose of α-gal A, nine of the 10 patients had significantly reduced Gb3 levels both in the liver and shed renal tubular epithelial cells in the urine sediment. These data demonstrate that single infusions of α-gal A prepared from transfected human fibroblasts are both safe and biochemically active in patients with Fabry disease. The degree of substrate reduction seen in the study is potentially clinically significant in view of the fact that Gb3 burden in Fabry patients increases gradually over decades. Taken together, these results suggest that enzyme replacement is likely to be an effective therapy for patients with this metabolic disorder.

Relevance:

30.00%

Publisher:

Abstract:

For many inborn errors of metabolism, early treatment is critical to prevent long-term developmental sequelae. We have used a gene-therapy approach to demonstrate this concept in a murine model of mucopolysaccharidosis type VII (MPS VII). Newborn MPS VII mice received a single intravenous injection with 5.4 × 10⁶ infectious units of recombinant adeno-associated virus encoding the human β-glucuronidase (GUSB) cDNA. Therapeutic levels of GUSB expression were achieved by 1 week of age in liver, heart, lung, spleen, kidney, brain, and retina. GUSB expression persisted in most organs for the 16-week duration of the study at levels sufficient to either reduce or prevent completely lysosomal storage. Of particular significance, neurons, microglia, and meninges of the central nervous system were virtually cleared of disease. In addition, neonatal treatment of MPS VII mice provided access to the central nervous system via an intravenous route, avoiding a more invasive procedure later in life. These data suggest that gene transfer mediated by adeno-associated virus can achieve therapeutically relevant levels of enzyme very early in life and that the rapid growth and differentiation of tissues does not limit long-term expression.

Relevance:

30.00%

Publisher:

Abstract:

How colloidal particles interact with each other is one of the key issues that determines our ability to interpret experimental results for phase transitions in colloidal dispersions and our ability to apply colloid science to various industrial processes. The long-accepted theories for answering this question have been challenged by results from recent experiments. Herein we show from Monte Carlo simulations that there is a short-range attractive force between identical macroions in electrolyte solutions containing divalent counterions. Complementing some recent and related results by others, we present strong evidence of attraction between a pair of spherical macroions in the presence of added salt ions for the conditions where the interacting macroion pair is not affected by any other macroions that may be in the solution. This attractive force follows from the internal-energy contribution of counterion mediation. Contrary to conventional expectations, for charged macroions in an electrolyte solution, the entropic force is repulsive at most solution conditions because of localization of small ions in the vicinity of macroions. Both Derjaguin–Landau–Verwey–Overbeek theory and Sogami–Ise theory fail to describe the attractive interactions found in our simulations; the former predicts only repulsive interaction and the latter predicts a long-range attraction that is too weak and occurs at macroion separations that are too large. Our simulations provide fundamental “data” toward an improved theory for the potential of mean force as required for optimum design of new materials including those containing nanoparticles.
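As an illustration of the simulation machinery (not the paper's actual model, units, or parameters), the sketch below runs a bare-bones Metropolis Monte Carlo of two charged macroions and their divalent counterions in a hard-walled box.

```python
import numpy as np

rng = np.random.default_rng(2)

# Highly simplified Metropolis sketch of two macroions plus divalent
# counterions (invented reduced units; not the paper's simulation model):
# bare Coulomb energy, hard-core overlap forbidden, hard walls (no images).
L_box = 20.0
charges = np.array([-12.0, -12.0] + [2.0] * 12)  # 2 macroions, 12 divalent counterions
radii = np.array([2.0, 2.0] + [0.2] * 12)
pos = rng.uniform(1.0, L_box - 1.0, (len(charges), 3))
beta, bjerrum = 1.0, 7.1  # inverse temperature and Bjerrum length (reduced units)

def energy(p):
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    overlap = (d < radii[:, None] + radii[None, :]) & ~np.eye(len(p), dtype=bool)
    if overlap.any():
        return np.inf  # hard-core overlap is forbidden
    i, j = np.triu_indices(len(p), k=1)
    return bjerrum * np.sum(charges[i] * charges[j] / d[i, j])

E = energy(pos)
for _ in range(20_000):
    k = rng.integers(len(charges))
    trial = pos.copy()
    trial[k] += rng.normal(0.0, 0.5, 3)
    if np.any(trial[k] < 0.0) or np.any(trial[k] > L_box):
        continue  # hard walls
    E_new = energy(trial)
    # Metropolis criterion: always accept downhill, Boltzmann test uphill.
    if np.isfinite(E_new) and (E_new <= E or rng.random() < np.exp(-beta * (E_new - E))):
        pos, E = trial, E_new
print("macroion separation:", np.linalg.norm(pos[0] - pos[1]))
```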

Relevance:

30.00%

Publisher:

Abstract:

The human cerebral cortex is notorious for the depth and irregularity of its convolutions and for its variability from one individual to the next. These complexities of cortical geography have been a chronic impediment to studies of functional specialization in the cortex. In this report, we discuss ways to compensate for the convolutions by using a combination of strategies whose common denominator involves explicit reconstructions of the cortical surface. Surface-based visualization involves reconstructing cortical surfaces and displaying them, along with associated experimental data, in various complementary formats (including three-dimensional native configurations, two-dimensional slices, extensively smoothed surfaces, ellipsoidal representations, and cortical flat maps). Generating these representations for the cortex of the Visible Man leads to a surface-based atlas that has important advantages over conventional stereotaxic atlases as a substrate for displaying and analyzing large amounts of experimental data. We illustrate this by showing the relationship between functionally specialized regions and topographically organized areas in human visual cortex. Surface-based warping allows data to be mapped from individual hemispheres to a surface-based atlas while respecting surface topology, improving registration of identifiable landmarks, and minimizing unwanted distortions. Surface-based warping also can aid in comparisons between species, which we illustrate by warping a macaque flat map to match the shape of a human flat map. Collectively, these approaches will allow more refined analyses of commonalities as well as individual differences in the functional organization of primate cerebral cortex.

Relevance:

30.00%

Publisher:

Abstract:

Function of the maize (Zea mays) gene sugary1 (su1) is required for normal starch biosynthesis in endosperm. Homozygous su1- mutant endosperms accumulate a highly branched polysaccharide, phytoglycogen, at the expense of the normal branched component of starch, amylopectin. These data suggest that both branched polysaccharides share a common precursor, and that the product of the su1 gene, designated SU1, participates in kernel starch biosynthesis. SU1 is similar in sequence to α-(1→6) glucan hydrolases (starch-debranching enzymes [DBEs]). Specific antibodies were produced and used to demonstrate that SU1 is a 79-kD protein that accumulates in endosperm coincident with the time of starch biosynthesis. Nearly full-length SU1 was expressed in Escherichia coli and purified to apparent homogeneity. Two biochemical assays confirmed that SU1 hydrolyzes α-(1→6) linkages in branched polysaccharides. Determination of the specific activity of SU1 toward various substrates enabled its classification as an isoamylase. Previous studies had shown, however, that su1- mutant endosperms are deficient in a different type of DBE, a pullulanase (or R enzyme). Immunoblot analyses revealed that both SU1 and a protein detected by antibodies specific for the rice (Oryza sativa) R enzyme are missing from su1- mutant kernels. These data support the hypothesis that DBEs are directly involved in starch biosynthesis.

Relevance:

30.00%

Publisher:

Abstract:

The etiolated germination process of oilseed plants is characterized by the mobilization of storage lipids, which serve as a major carbon source for the seedling. We found that during early stages of germination in cucumber, a lipoxygenase (linoleate:oxygen oxidoreductase, EC 1.13.11.12) form is induced that is capable of oxygenating the esterified fatty acids located in the lipid-storage organelles, the so-called lipid bodies. Large amounts of esterified (13S)-hydroxy-(9Z,11E)-octadecadienoic acid were detected in the lipid bodies, whereas only traces of other oxygenated fatty acid isomers were found. This specific product pattern confirms the in vivo action of this lipoxygenase form during germination. Lipid fractionation studies of lipid bodies indicated the presence of lipoxygenase products both in the storage triacylglycerols and, to a greater extent, in the phospholipids surrounding the lipid stores as a monolayer. The degree of oxygenation of the storage lipids increased drastically during the time course of germination. We show that oxygenated fatty acids are preferentially cleaved from the lipid bodies and are subsequently released into the cytoplasm. We suggest that they may serve as substrate for β-oxidation. These data suggest that during etiolated germination a lipoxygenase initiates the mobilization of storage lipids. Possible mechanisms of this involvement are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Acknowledgements. This work is dedicated to the memory of Andrés Pérez-Estaún, brilliant scientist, colleague, and friend. The authors sincerely thank Ian Ferguson and an anonymous reviewer for their useful comments on the manuscript. Xènia Ogaya is currently supported at the Dublin Institute for Advanced Studies by a Science Foundation Ireland grant, IRECCSEM (SFI grant 12/IP/1313). Juan Alcalde is funded by NERC grant NE/M007251/1, on interpretational uncertainty. Juanjo Ledo, Pilar Queralt and Alex Marcuello thank the Ministerio de Economía y Competitividad and EU Feder Funds through grant CGL2014-54118-C2-1-R. Funding for this project has been partially provided by the Spanish Ministry of Industry, Tourism and Trade through the CIUDEN-CSIC-Inst. Jaume Almera agreement (ALM-09-027: Characterization, Development and Validation of Seismic Techniques applied to CO2 Geological Storage Sites), the CIUDEN-Fundació Bosch i Gimpera agreement (ALM-09-009: Development and Adaptation of Electromagnetic Techniques: Characterisation of Storage Sites) and the project PIERCO2 (Progress In Electromagnetic Research for CO2 geological reservoirs, CGL2009-07604). The CIUDEN project is co-financed by the European Union through the Technological Development Plant of the Compostilla OXYCFB300 Project (European Energy Programme for Recovery).