955 results for Minkowski Sum of Sets


Relevance: 100.00%

Abstract:

A continuous age model for the brief climate excursion at the Paleocene-Eocene boundary has been constructed by assuming a constant flux of extraterrestrial 3He (3He[ET]) to the seafloor. 3He[ET] measurements from ODP Site 690 provide quantitative evidence for the rapid onset of global warming and of the associated disturbance to the Earth's surficial carbon budget at this time. These observations support astronomically calibrated age models indicating extremely rapid release of isotopically light carbon, possibly from seafloor methane hydrate, as the proximal cause of the event. However, the 3He[ET] technique indicates a previously unrecognized and extreme increase in sedimentation rate coincident with the return of climate proxies to pre-event values. The 3He[ET]-based age model thus suggests a far more rapid recovery from the climatic perturbation than previously proposed or predicted on the basis of the modern carbon cycle, and so may indicate additional or accelerated mechanisms of carbon removal from the ocean-atmosphere system during this period. 3He[ET] was also measured at ODP Site 1051 to test the validity of the Site 690 chronology. Comparison of these data sets seems to require removal of several tens of kyr of sediment within the climatic excursion at Site 1051, an observation consistent with sediment structures and previous age modeling efforts. The Site 1051 age model shows a ~30 kyr period in which climate proxies return toward pre-event values, after which they remain invariant for ~80 kyr. If this rise represents the recovery interval identified at Site 690, then the 3He[ET]-based age models of the two sites are in good agreement. However, alternative interpretations are possible, and work on less disrupted sites is required to evaluate the reliability of the proposed new chronology of the climate excursion.
Regardless of these details, this work shows that the 3He[ET] technique can provide useful independent evidence for the development and testing of astronomically calibrated age models.
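The constant-flux assumption behind such an age model can be sketched in a few lines: if extraterrestrial 3He rains onto the seafloor at a constant flux F, the mass accumulation rate of each interval is MAR = F / C (with C the measured 3He[ET] concentration), and elapsed time integrates downcore. All numbers below are synthetic, not Site 690 data.

```python
# Sketch of a constant-flux 3He age model (illustrative numbers only).
# Assumption: constant extraterrestrial 3He_ET flux F, so the bulk mass
# accumulation rate of an interval is MAR = F / C, where C is the measured
# 3He_ET concentration of that interval.

F_HE3 = 1.0e-12   # assumed constant 3He_ET flux, cc STP cm^-2 kyr^-1 (made up)
RHO_DRY = 1.0     # dry bulk density, g cm^-3 (made up)

# (depth in cm, 3He_ET concentration in cc STP per g) for successive intervals
samples = [
    (0.0,  1.0e-12),
    (10.0, 2.0e-12),   # higher concentration -> slower accumulation
    (20.0, 0.5e-12),   # lower concentration -> faster accumulation
    (30.0, 1.0e-12),
]

def age_model(samples, flux=F_HE3, rho=RHO_DRY):
    """Integrate elapsed time downcore: dt = dz * rho * C / F."""
    ages = [0.0]
    for (z0, c0), (z1, _) in zip(samples, samples[1:]):
        mar = flux / c0                    # g cm^-2 kyr^-1
        dt = (z1 - z0) * rho / mar         # kyr spanned by this interval
        ages.append(ages[-1] + dt)
    return ages

ages = age_model(samples)
for (z, _), t in zip(samples, ages):
    print(f"depth {z:5.1f} cm -> age {t:6.1f} kyr")
```

Note how an interval with anomalously low 3He[ET] concentration (the 20-30 cm interval here) is assigned a much shorter duration, which is exactly how the technique detects the extreme sedimentation-rate increase described in the abstract.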

Relevance: 100.00%

Abstract:

Cuando se trata de Rem Koolhaas, su espejo no refleja una sola imagen sino múltiples: es un prisma poliédrico. Su espejo nos devuelve el Rem mediático, el intelectual, el conceptualizador, el constructor, el analista, el periodista, el actor... En el caso de esta investigación, fijamos el punto de mira en el Rem COMUNICADOR. “Rem a los dos lados del espejo” se enmarca en una investigación sobre los medios de comunicación de arquitectura, su reflejo en la producción arquitectónica y viceversa. Se trata de llegar a discernir si comunicación y producción arquitectónica colisionan y confluyen en el caso de grandes comunicadores como Rem Koolhaas, si el mensaje y el medio transmisor adquieren las mismas cualidades. Centrándose en la figura de Rem Koolhaas, la tesis aborda la evolución de su faceta comunicativa y las transformaciones sucesivas en el campo de la comunicación arquitectónica, en paralelo a su evolución conceptual a lo largo de su trayectoria. La investigación, por tanto, no se centra tanto en su componente teórica o en la práctica arquitectónica de OMA, sino en la exposición de su producción al mundo, especialmente a través de sus ensayos y libros. “Delirious New York” y “SMLXL” son un reflejo del momento conceptual en que se inscriben, y contienen mucha información sobre los referentes gráficos que irremediablemente han influido en su composición. Especialmente, la aparición de “SMLXL” supuso un revulsivo para el mundo de la comunicación arquitectónica, porque puso el foco sobre la importancia de dejar atrás un discurso narrativo lineal y unifocal, para afrontar la comunicación barajando múltiples variables y aproximaciones, en un proceso similar al desarrollo de un proyecto de arquitectura. Presenta un diseño muy novedoso y una edición extremadamente cuidada, que atiende a parámetros mucho más ambiciosos que los meramente narrativos.
Profundiza en la necesidad de una temática global, planteando cuál es la aproximación más apropiada para cada uno de los proyectos que describe, transmitiendo al lector una percepción más allá de lo estrictamente visual, más próxima a lo sensorial. Además, su enorme repercusión a nivel internacional y el gran interés que despertó (no solamente entre los arquitectos, sino también entre diseñadores gráficos, publicistas, personas provenientes de todo tipo de tendencias artísticas y público en general), provocó la globalización del fenómeno de las publicaciones arquitectónicas y puso de manifiesto la importancia de la comunicación como una disciplina en sí misma, dentro de la producción arquitectónica en la era actual. A pesar de la importancia de “SMLXL” a todos los niveles, la presente tesis plantea que, donde realmente se culmina esa experiencia comunicativa, es en “Content”, al incluir nuevos parámetros relacionados con la fusión conceptual de continente y contenido. Es en esta publicación donde el objeto de la comunicación y la expresión de la misma se convierten en un único elemento, que se rige por leyes similares. En este caso, la ley fundamental es la aplicación hasta sus máximas consecuencias de la “cultura de la congestión”, tanto en el mensaje como en el medio, generando lo que hemos convenido en denominar “comunicación congestiva”. Esta concepción deviene en que necesariamente se materialice como un producto efímero, desechable, casi virtual, porque responde a las condiciones de un momento muy concreto y específico y fuera de ese contexto pierde su significación. La “cultura de la congestión” empieza a surgir en los planteamientos de Koolhaas en la Architectural Association School of Architecture de Londres, bajo la tutela de Elia Zenghelis.
Posteriormente se desarrolla en su manifiesto retroactivo sobre Manhattan, “Delirious New York”, donde declara la guerra abierta al urbanismo del movimiento moderno y afirma que la ciudad realmente contemporánea es aquella que es fruto de un desarrollo no planificado, hiperdensa y posible gracias a los avances tecnológicos de su era. Finalmente comienza a materializarse en la Diploma Unit 9 de la AA, donde entra como profesor en 1975, dejando una huella indeleble en las generaciones posteriores de arquitectos que pasaron por dicha unidad. Rem Koolhaas es ante todo un intelectual y por ello, todo el constructo teórico sobre la metrópolis comienza a reflejarse en su obra a través de OMA desde el comienzo de su producción. Podemos decir a grandes rasgos que su carrera está marcada por dos hitos históricos fundamentales que determinan tres etapas diferenciadas en su producción. En sus primeros años de profesión, Koolhaas sigue fascinado por la metrópolis urbana y la aplicación del método paranoico crítico a su producción arquitectónica. Es un arquitecto profundamente surrealista. Entiende este método como una estrategia de conocimiento y aproximación al mundo que le rodea: “dejar salir el inconsciente pero sostenerlo con las muletas de la racionalidad”. Pero lo que en realidad le interesa es su aplicación a la gran escala, el “Bigness”, y por ello, participa en proyectos muy ambiciosos de los que surgen conceptos que, más allá de resultar premiados o no, han dejado una huella ideológica en el devenir de la arquitectura. Entre estos proyectos, cabe destacar su propuesta para el Parque de la Villette o la Très Grande Bibliothèque de París. Sus proyectos de esta época destilan una gran carga conceptual, que deviene en unos interiores sorprendentes pero una apariencia exterior sobria o incluso podríamos decir "povera", por el uso de materiales efímeros, poco habituales en la macro-arquitectura hasta ese momento.
Súbitamente, en 1997, explotó el denominado “Efecto Bilbao”, de la mano de Frank Gehry (1). El Museo Guggenheim de Bilbao, con su espectacularidad, sus formas pregnantes e imposibles, impacta al mundo. Nace la era de la “Arquitectura del Espectáculo”; la transformación de la ciudad a través de ICONOS que actúen como nodos de atracción y concentración en torno a los cuales supuestamente se revitaliza la actividad económica, cultural y sociopolítica de la ciudad, como si a través de un único gesto se pudieran regenerar todos los tejidos internos de la urbe. Rem Koolhaas comprende rápidamente que la aproximación a la ciudad ha cambiado y, sobre todo, el mercado. En el mundo de la globalización, la única manera de llegar a materializar el “Bigness”, es encerrando sus ejercicios intelectuales en formas pregnantes, bellas, icónicas, espectaculares. Koolhaas encuentra su marca personal en la estética “Stealth”, proveniente de los aviones de combate facetados para evitar los radares, elaborados en los años 80. De esta época surgen proyectos como la Casa da Música de Oporto o la Biblioteca de Seattle; ambos edificios son iconos facetados, de belleza pregnante, que dejan una huella indeleble en la ciudad y provocan, al igual que el Guggenheim, un cierto efecto de recuperación y revitalización en el entorno en que se asientan, al menos de manera temporal. En cualquier caso, Koolhaas nunca abandona los ejercicios meramente teóricos, pero segrega su actividad en dos: OMA produce aquello que tiene vocación de ser construido y se rige por los parámetros del mercado global y AMO, la otra cara del espejo de Rem, aplica el pensamiento arquitectónico a campos no explorados, sin la dependencia de agentes externos, pudiendo permitirse ser un laboratorio puramente experimental. 
En este escenario, llega el 11 de septiembre de 2001 y el ataque a las Torres Gemelas de Nueva York tiene efectos devastadores a todos los niveles, significando, en un período de tiempo sorprendentemente corto, un cambio en el orden mundial. Rem Koolhaas da entonces un giro de 180 grados, dirige su mirada hacia China, donde entiende que sus aportaciones tienen un beneficio social más directo que en occidente. (2) Para presentar al mundo su nuevo cambio de rumbo y la creación del “Think Tank” AMO, plantea una gran exposición en la Neue Nationalgalerie de Berlín bajo el título de “Content”, experiencia paralela a la edición del libro con el mismo título, que inicialmente nace como “catálogo de la exposición”, pero que internamente siempre se concibió como el documento más trascendente en el estudio desde “SMLXL”. Sin embargo, en muchos aspectos se trata de su opuesto: una publicación con formato revista, de tapa blanda, con paginado muy fino, formato de "folleto de supermercado" y contenido hiperdenso. Es un experimento efímero, fugaz, ligero, barato, de “usar y tirar”. De hecho, está fuera de stock, ya no se edita. Probablemente Rem Koolhaas desaprobaría que se hiciera una investigación que pusiera el foco sobre el mismo, porque diez años después de su publicación seguramente opine que su vigencia ha caducado. Sin embargo, muestra con una claridad meridiana el estado conceptual y vital de OMA en el momento de su publicación y representa, además, un verdadero hito en la comunicación arquitectónica, un punto de no retorno, el máximo exponente de lo que hemos denominado “comunicación congestiva”. La presente tesis plantea que “Content” contiene la esencia de la mayor aportación de Rem Koolhaas al mundo de la arquitectura: la transformación profunda y definitiva de la comunicación arquitectónica mediante la convergencia del estado conceptual y la transmisión del mismo. Su legado arquitectónico y conceptual ha marcado a todas las generaciones posteriores de manera indeleble.
Sus ensayos, sus teorías, sus proyectos y sus edificaciones ya pertenecen a la historia de la arquitectura, sin ninguna duda. Pero es su revisión del concepto de la comunicación en arquitectura lo que ha tenido y tendrá un reflejo inmediato en las generaciones futuras, no solamente en la comunicación sino en su arquitectura, a través de un intercambio biyectivo. El planteamiento a futuro sería determinar qué sucede tras “Content”, tras la hiperdensidad máxima, tras la cultura de la congestión visual; qué es lo que propone Koolhaas y qué se va a plantear también en el mundo de la comunicación arquitectónica. Para ello, estudiaremos en profundidad sus últimos proyectos relacionados con la comunicación, como su propuesta para la Biennale de Arquitectura de Venecia de 2014, su intensa investigación sobre el “Metabolismo” en “Project Japan: Metabolism Talks...”, o la dirección de sus últimos planteamientos territoriales. En los últimos tiempos Rem Koolhaas habla de “Preservación”, de “Sobriedad”, de “Esencialismo”, de “Performance”... El autor intelectual de la cultura de la congestión habla ahora de la “low density”... como no podía ser de otra manera en la otra cara del espejo. En definitiva, el color blanco como suma de todos los colores, todas las longitudes de onda del espectro visible recibidas al tiempo.

ABSTRACT

When talking about Rem Koolhaas, the mirror does not reflect a single image but numerous ones: it is nothing but a polyhedral prism. His mirror gives us the image of Rem the media celebrity, the intellectual, the conceptualizer, the builder, the analyst, the journalist, the actor... This research sets the spotlight on Rem the COMMUNICATOR. "Rem on both sides of the mirror" belongs to a research on architectural media, its influence on architectural production and vice versa.
It is aimed at getting to discern whether communication and architectural production collide and converge in the case of great communicators such as Rem Koolhaas, and whether the message and transmission media acquire the same features. Focusing on the figure of Rem Koolhaas, this thesis addresses the evolution of his communicative facet and the successive transformations in the field of architectural communication, parallel to the conceptual evolution he underwent throughout his career. Therefore, this research is not so much focused on his theoretical component or on OMA’s architectural practice, but on the exhibition of his production to the world, especially through his essays and books. "Delirious New York" and "SMLXL" hold up a mirror to the conceptual moment they are part of, and contain a great deal of information about the graphic references that have inevitably influenced his work. Especially, the launch of "SMLXL" was a salutary shock for the architectural communication world, since it set the spotlight on the importance of leaving a linear and unifocal narrative behind in order to face communication considering multiple variables and approaches, based on a process similar to the development of an architectural project. It offers a very innovative design and extremely careful editing, which deals with parameters far more ambitious than the merely narrative ones. It explores the need for a global subject and suggests the most appropriate approach for each of the projects described, giving the reader an insight closer to the sensory, beyond the strictly visual.
In addition, its huge international impact and the great interest shown, not only by architects but also by graphic designers, publishers, people from all kinds of artistic trends and the general public, led to the globalisation of the architectural publications phenomenon and brought the importance of communication as a discipline in itself, within the architectural production in the age at hand, to light. Despite the importance of "SMLXL" at all levels, this thesis suggests that the communication experience really culminates in "Content", for it includes new conceptual parameters associated with the container-content conceptual fusion. It is in this book where the purpose of communication and the expression of such become a single element, ruled by similar laws. In this particular case, the fundamental law is to implement the "culture of congestion" to its extreme consequences in both the message and the media, leading to what we have agreed to refer to as "congestive communication”. This concept leads to its inevitable materialisation into an ephemeral, disposable, almost virtual product, because it meets the conditions of a very concrete and specific time, and outside that context it loses its significance. The "culture of congestion" emerged in Koolhaas’ approaches under the guidance of Elia Zenghelis, in the Architectural Association School of Architecture of London. Subsequently, his retroactive manifesto on Manhattan, "Delirious New York" developed it, waging an all-out war against the modern movement urbanism and maintaining that the really contemporary cities are those hyperdense ones that rise as a result of an unplanned development and thanks to the typical technological advances of their time. Finally it began to materialise in the Diploma Unit 9 of the AA, in which he started lecturing in 1975, leaving an indelible mark on subsequent generations of architects who passed that unit. 
First and foremost, Rem Koolhaas is an intellectual and, therefore, all the theoretical construct on the metropolis began to be reflected in his work through OMA from the very beginning of his production. Broadly speaking, we can say that his career is marked by two fundamental historic events, which determine three distinct stages in his production. In the early years of his career, Koolhaas was still fascinated by the urban metropolis and the implementation of the paranoiac-critical method in his architectural production. He was then a deeply surrealist architect. He understood this method as a knowledge strategy and an approach to the world around him: "let the subconscious out but hold it with the crutches of rationality". However, he was actually interested in its implementation on a broad scale, the "Bigness", and therefore, he took part in ambitious projects that gave rise to concepts that, whether awarded or not, left an ideological mark on the evolution of architecture. These projects included his proposal for the Parc de la Villette or the Très Grande Bibliothèque in Paris. The projects he carried out during this period showed a great conceptual background, which evolved into surprising interiors but a sober, or even "povera", exterior appearance, thanks to the use of ephemeral materials that were atypical in the macro-architecture field until that moment. Suddenly, in 1997, the so-called "Bilbao effect" boomed thanks to Frank Gehry (1). The Guggenheim Museum of Bilbao amazed the world with its spectacular nature and its striking, impossible shapes. It was the beginning of the era of “The architecture of spectacle”: the transformation of the city through ICONS that would act as nodes of attraction and gathering, around which the economic, cultural and socio-political activity of the city was supposed to be revitalized, as if through a single gesture all internal tissues of the city could be rebuilt.
Rem Koolhaas quickly realized that the approach to the city had changed and, above all, so had the market. In the world of globalisation, the only way to get to materialise such "Bigness" was by enclosing his intellectual exercises in striking, beautiful, iconic and spectacular shapes. Koolhaas found his personal brand in the Stealth aesthetic, derived from the faceted American combat aircraft of the 1980s, shaped to evade radar. Projects such as the Casa da Música in Porto or the Seattle Library date from this period; both buildings are faceted icons of striking beauty that left an indelible mark on the city and caused, like the Guggenheim, some degree of recovery and revitalization in the environment in which they were based, at least temporarily. In any case, Koolhaas never gave the merely theoretical exercises up, but he segregated his work into two: OMA produced what was destined to be built and ruled by the parameters of the global market, while AMO, Rem’s other side of the mirror, applied architectural thought to unexplored fields, independent of external agents and able to work as a purely experimental laboratory. Against this backdrop came September 11th, 2001, and the attacks on the Twin Towers in New York had devastating effects at all levels, leading to a change in the world order in a surprisingly short period of time. Rem Koolhaas made a 180° turn, directing his vision towards China, where he believed his contributions would have a more direct social benefit than in the Western world. (2) In order to introduce his new course of direction and the creation of the AMO "Think Tank", he planned a major exhibition in the Neue Nationalgalerie of Berlin under the title "Content", in parallel with the edition of the book with the same title, which was at first the "exhibition catalog" but, deep down, was always conceived as the most important document of the office since "SMLXL".
However, in many ways it was just the opposite: a publication characterised by its magazine format, soft cover, very thin pages, "supermarket brochure" form and hyperdense content. It was an ephemeral, brief, light, cheap and "disposable" experiment. In fact, it is currently out of stock and out of print. Rem Koolhaas would probably disapprove of a research project that sets the spotlight on it, for he would probably say its validity has expired, given that it has been ten years since its publication. However, it shows OMA’s conceptual and vital status at the time of its publication with crystalline clarity and it is also a true milestone in architectural communication. A point of no return. The epitome of the so-called "congestive communication". This thesis suggests that "Content" contains the essence of Rem Koolhaas’ greatest contribution to the world of architecture: the deep and definitive transformation of architectural communication through the convergence of the conceptual state and the transmission thereof. His architectural and conceptual legacy has left an indelible mark on all subsequent generations. There is no doubt his essays, theories, projects and buildings already belong to the history of architecture. But it is his review of the concept of communication in architecture that has had and shall have an immediate influence on future generations, not only in their communication but also in their architecture, through a bijective exchange. Future approaches should try to determine what happens after "Content", after the maximum hyperdensity, after the visual culture of congestion: what Koolhaas will propose, and what will happen in the world of architectural communication.
To this end, we shall study in depth his latest communication-related projects, such as his proposal for the Venice Architecture Biennale in 2014, his intensive research on "Metabolism" in "Project Japan: Metabolism Talks...", or the course of his latest territorial approaches. Most recently, Rem Koolhaas has talked about "Preservation", "Sobriety", "Essentialism", "Performance", etc. The mastermind of the culture of congestion now speaks of "low density"... as it could not be otherwise, on the other side of the mirror. In short: white as the sum of all colors, all wavelengths of the visible spectrum received at the same time.

Relevance: 100.00%

Abstract:

We present an analysis of a pointed 141 ks Chandra high-resolution transmission gratings observation of the Be X-ray emitting star HD110432, a prominent member of the γ Cas analogs. This observation represents the first high-resolution spectrum taken for this source as well as the longest uninterrupted observation of any γ Cas analog. The Chandra light curve shows high variability, but its analysis fails to detect any coherent periodicity up to a frequency of 0.05 Hz. Hardness ratio versus intensity analyses demonstrate that the relative contributions of the [1.5-3] Å, [3-6] Å, and [6-16] Å energy bands to the total flux change rapidly in the short term. The analysis of the Chandra High Energy Transmission Grating (HETG) spectrum shows that three model components are needed to describe the spectrum correctly. Two of those components are optically thin thermal plasmas of different temperatures (kT ≈ 8-9 and 0.2-0.3 keV, respectively) described by the models vmekal or bvapec. The Fe abundance in each of these two components is equal within the errors and slightly subsolar, with Z ≈ 0.75 Z☉. The bvapec model better describes the Fe L transitions, although it cannot fit the Na XI Lyα line at 10.02 Å well, which appears to be overabundant. Two different models seem to describe the third component equally well. One possibility is a third hot optically thin thermal plasma at kT = 16-21 keV with an Fe abundance Z ≈ 0.3 Z☉, definitely smaller than for the other two thermal components. Furthermore, the bvapec model describes the Fe K shell transitions well because it accounts for the turbulence broadening of the Fe XXV and Fe XXVI lines, with v_turb ≈ 1200 km s⁻¹. These two lines, contributed mainly by the hot thermal plasma, are significantly wider than the Fe Kα line, whose FWHM < 5 mÅ is not resolved by Chandra. Alternatively, the third component can be described by a power law with a photon index of Γ = 1.56.
In either case, the Chandra HETG spectrum establishes that each one of these components must be modified by distinct absorption columns. The analysis of a noncontemporaneous 25 ks Suzaku observation shows the presence of a hard tail extending up to at least 33 keV. The Suzaku spectrum is described with the sum of two components: an optically thin thermal plasma at kT ≈ 9 keV and Z ≈ 0.74 Z☉, and a very hot second plasma with kT ≈ 33 keV or, alternatively, a power law with a photon index of Γ = 1.58. In either case, each of the two components must be affected by different absorption columns. Therefore, the kT = 8-9 keV component is definitely needed, while the nature of the harder emission cannot be unambiguously established with the present data sets. The analysis of the Si XIII and S XV He-like triplets present in the Chandra spectrum points to a very dense (n_e ~ 10¹³ cm⁻³) plasma located either close to the stellar surface (r < 3R_*) of the Be star or, alternatively, very close (r ~ 1.5 R_WD) to the surface of a (hypothetical) white dwarf companion. We argue, however, that the available data support the first scenario.
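As an illustration of the hardness-ratio-versus-intensity analysis mentioned above, the normalised ratio between a hard and a soft band can be computed per time bin. The band edges follow the text; the count rates, time bins and the `hardness_ratio` helper are invented for this sketch, not HD110432 data.

```python
# Illustrative hardness-ratio calculation for the three wavelength bands named
# above ([1.5-3] A, [3-6] A, [6-16] A). All count rates are synthetic.

def hardness_ratio(hard, soft):
    """Normalised hardness ratio HR = (H - S) / (H + S), in [-1, +1]."""
    return (hard - soft) / (hard + soft)

# time bin -> (rate in 1.5-3 A, 3-6 A, 6-16 A) in counts/s (made up)
light_curve = {
    0: (0.10, 0.30, 0.20),
    1: (0.25, 0.30, 0.15),   # harder bin: short wavelengths = higher energies
    2: (0.05, 0.20, 0.40),   # softer bin
}

for t, (b1, b2, b3) in light_curve.items():
    hr = hardness_ratio(b1, b3)  # hardest band vs softest band
    total = b1 + b2 + b3
    print(f"bin {t}: total {total:.2f} cts/s, HR(1.5-3 / 6-16) = {hr:+.2f}")
```

Plotting HR against the total rate per bin is the "hardness ratio versus intensity" diagnostic the abstract refers to: rapid movement of points across that plane signals short-term changes in the relative band contributions.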

Relevance: 100.00%

Abstract:

This paper contributes to the literature on the intra-firm diffusion of innovations by investigating the factors that affect the firm’s decision to adopt and use sets of complementary innovations. We define as complementary those innovations whose joint use generates super-additive gains, i.e., the gain from joint adoption is higher than the sum of the gains derived from adopting each innovation in isolation. From a theoretical perspective, we present a simple decision model, whereby the firm decides ‘whether’ and ‘how much’ to invest in each of the innovations under investigation based upon the expected profit gain from each possible combination of adoption and use. The model shows how the extent of complementarity among the innovations can affect the firm’s profit gains and therefore the likelihood that the firm will adopt these innovations jointly, rather than individually. From an empirical perspective, we focus on four sets of management practices, namely operating (OMP), monitoring (MMP), targets (TMP) and incentives (IMP) management practices. We show that these sets of practices, although to different extents, are complementary to each other. Then, we construct a synthetic indicator of the depth of their use. The resulting intra-firm index reflects not only the number of practices adopted but also the depth of their individual use and the extent of their complementarity. The empirical testing of the decision model is carried out using evidence from the adoption behaviour of a sample of 1,238 UK establishments present in the 2004 Workplace Employment Relations Survey (WERS). Our empirical results show that the intra-firm profitability-based model explains more of the variability of joint adoption than models based upon the variability of adoption and use of individual practices.
We also investigate whether a number of firm-specific and market characteristics may, by affecting the size of the gains that the joint adoption of innovations can generate, drive the intensity of use of the four innovations. We find that establishment size, foreign ownership, exposure to an international market and the degree of homogeneity of the final product are important determinants of the intensity of the joint adoption of the four innovations. Most importantly, our results point out that the factors that the economics of innovation literature has shown to affect the intensity of use of a technological innovation also affect the intensity of use of sets of innovative management practices. However, they can explain only a small part of the diversity of their joint adoption and use by the firms in the sample.
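The super-additivity definition above has a compact formal statement: two innovations A and B are complementary when π(1,1) − π(0,0) > [π(1,0) − π(0,0)] + [π(0,1) − π(0,0)], which is equivalent to supermodularity, π(1,1) + π(0,0) > π(1,0) + π(0,1). A minimal sketch of that check, with hypothetical profit figures:

```python
# The paper's definition of complementarity as super-additive gains, in code.
# Profit figures are hypothetical; (a, b) flags adoption of innovations A and B.

profit = {
    (0, 0): 100.0,   # baseline
    (1, 0): 110.0,   # gain of 10 from A alone
    (0, 1): 108.0,   # gain of 8 from B alone
    (1, 1): 125.0,   # gain of 25 from joint adoption > 10 + 8
}

def are_complementary(profit):
    """Super-additive gains: the joint gain exceeds the sum of the individual
    gains. Equivalent to supermodularity of the profit function."""
    base = profit[(0, 0)]
    gain_a = profit[(1, 0)] - base
    gain_b = profit[(0, 1)] - base
    gain_joint = profit[(1, 1)] - base
    return gain_joint > gain_a + gain_b

print(are_complementary(profit))  # 25 > 10 + 8 for these numbers
```

A profit-maximising firm facing these payoffs would prefer joint adoption to either innovation alone, which is exactly the mechanism the decision model described above formalises.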

Relevance: 100.00%

Abstract:

* This research is partly supported by the INTAS 04-77-7173 project, http://www.intas.be

Relevance: 100.00%

Abstract:

The present data compilation includes dinoflagellate growth rates, grazing rates and gross growth efficiencies determined either in the field or in laboratory experiments. From the existing literature, we synthesized all data that we could find on dinoflagellates. Some sources might be missing, but none were purposefully ignored. We did not include autotrophic dinoflagellates in the database, but mixotrophic organisms may have been included, owing to the large uncertainty about which taxa are mixotrophic, heterotrophic or symbiont-bearing. Field data on microzooplankton grazing consist mostly of grazing rates obtained with the dilution technique over a 24 h incubation period. Laboratory grazing and growth data are focused on pelagic ciliates and heterotrophic dinoflagellates. The experiments measured grazing or growth as a function of prey concentration or at saturating prey concentration (maximal grazing rate). Counting experiments that measured growth and grazing simultaneously as one data point, there is a total of 801 data points for the dinoflagellates.
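Grazing measured as a function of prey concentration, as in the laboratory data described above, is commonly summarised by a saturating (Michaelis-Menten / Holling type II) functional response, g(P) = g_max·P/(K + P). The sketch below fits that curve to invented data; the crude grid search keeps it dependency-free, and a real analysis would use a proper optimiser such as scipy.optimize.curve_fit.

```python
# Saturating functional response g(P) = g_max * P / (K + P) - one common way to
# summarise grazing rate vs prey concentration. Data points are invented.

def holling_ii(prey, g_max, k):
    """Holling type II / Michaelis-Menten ingestion rate."""
    return g_max * prey / (k + prey)

# (prey cells mL^-1, grazing rate in prey predator^-1 h^-1), synthetic
data = [(50, 1.8), (100, 3.1), (200, 4.6), (400, 6.2), (800, 7.2), (1600, 7.9)]

def fit_grid(data):
    """Crude least-squares fit over a parameter grid (dependency-free sketch;
    use a real optimiser for actual work)."""
    best = None
    for g_max in [x * 0.1 for x in range(50, 151)]:   # 5.0 .. 15.0
        for k in range(50, 501, 10):                  # 50 .. 500
            sse = sum((r - holling_ii(p, g_max, k)) ** 2 for p, r in data)
            if best is None or sse < best[0]:
                best = (sse, g_max, k)
    return best

sse, g_max, k = fit_grid(data)
print(f"g_max ~ {g_max:.1f}, K ~ {k}, SSE = {sse:.3f}")
```

The fitted g_max is the "maximal grazing rate at saturating prey concentration" the compilation records, and K is the prey concentration at which grazing reaches half of that maximum.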

Relevance: 100.00%

Abstract:

The present data compilation includes ciliate growth rates, grazing rates and gross growth efficiencies determined either in the field or in laboratory experiments. From the existing literature, we synthesized all data that we could find on ciliates. Some sources might be missing, but none were purposefully ignored. Field data on microzooplankton grazing consist mostly of grazing rates obtained with the dilution technique over a 24 h incubation period. Laboratory grazing and growth data are focused on pelagic ciliates and heterotrophic dinoflagellates. The experiments measured grazing or growth as a function of prey concentration or at saturating prey concentration (maximal grazing rate). Counting experiments that measured growth and grazing simultaneously as one data point, there is a total of 1485 data points for the ciliates.

Relevance: 100.00%

Abstract:

Nowadays, the development of photovoltaic (PV) technology is consolidated as a source of renewable energy, and maximising the energy efficiency of PV plants is today a major research challenge. The main requirement for this purpose is to know, in real time, the performance of each of the PV modules that make up the PV field. To this end, a PLC-communications-based Smart Monitoring and Communications Module, able to monitor the operating parameters of each individual PV module, has been developed at the University of Malaga. With this device it is possible to check whether any of the panels is underperforming due to a malfunction or to partial shading of its surface. Since these fluctuations in the electricity production of a single panel affect the overall output of the string it belongs to, it is necessary to isolate the problem and reroute the energy through alternative paths in the PV array configuration.
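A sketch of the kind of per-panel check such a monitoring device enables: flag modules whose reported power falls well below the string median. The panel IDs, power readings and the 0.8 threshold are all invented for illustration; they are not part of the Malaga system described above.

```python
# Sketch of per-panel underperformance detection in a string, the kind of check
# module-level monitoring enables. Readings and threshold are invented.

from statistics import median

def flag_underperformers(powers, threshold=0.8):
    """Flag panels producing less than `threshold` times the string median.
    In a series string, one shaded or faulty panel drags down the whole string,
    which is why isolating it (or routing around it) matters."""
    ref = median(powers.values())
    return [pid for pid, p in powers.items() if p < threshold * ref]

string_a = {  # panel id -> measured power (W), synthetic
    "P01": 248.0, "P02": 251.0, "P03": 172.0,  # P03 partially shaded
    "P04": 249.0, "P05": 250.0, "P06": 247.0,
}

print(flag_underperformers(string_a))  # -> ['P03']
```

The median is used as the reference rather than the mean so that the faulty panel itself does not drag the reference value down.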

Resumo:

Several components of the metabolic syndrome, particularly diabetes and cardiovascular disease, are known to be oxidative stress-related conditions and there is research to suggest that antioxidant nutrients may play a protective role in these conditions. Carotenoids are compounds derived primarily from plants and several have been shown to be potent antioxidant nutrients. The aim of this study was to examine the associations between metabolic syndrome status and major serum carotenoids in adult Australians. Data on the presence of the metabolic syndrome, based on International Diabetes Federation 2005 criteria, were collected from 1523 adults aged 25 years and over in six randomly selected urban centers in Queensland, Australia, using a cross-sectional study design. Weight, height, BMI, waist circumference, blood pressure, fasting and 2-hour blood glucose and lipids were determined, as well as five serum carotenoids. Mean serum alpha-carotene, beta-carotene and the sum of the five carotenoid concentrations were significantly lower (p<0.05) in persons with the metabolic syndrome (after adjusting for age, sex, education, BMI status, alcohol intake, smoking, physical activity status and vitamin/mineral use) than persons without the syndrome. Alpha, beta and total carotenoids also decreased significantly (p<0.05) with increased number of components of the metabolic syndrome, after adjusting for these confounders. These differences were significant among former smokers and non-smokers, but not in current smokers. Low concentrations of serum alpha-carotene, beta-carotene and the sum of five carotenoids appear to be associated with metabolic syndrome status. Additional research, particularly longitudinal studies, may help to determine if these associations are causally related to the metabolic syndrome, or are a result of the pathologies of the syndrome.

Resumo:

Pooled serum samples collected from 8132 residents in 2002/03 and 2004/05 were analyzed to assess human polybrominated diphenyl ether (PBDE) concentrations from specified strata of the Australian population. The strata were defined by age (0-4 years, 5-15 years, <16 years, 16-30 years, 31-45 years, 46-60 years, and >60 years); region; and gender. For both time periods, infants and older children had substantially higher PBDE concentrations than adults. For samples collected in 2004/05, the mean ± standard deviation ΣPBDE (sum of the homologue groups for the mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, and deca-BDEs) concentrations for the 0-4 and 5-15 year strata were 73 ± 7 and 29 ± 7 ng/g lipid, respectively, while for all adults >16 years the mean concentration was lower, at 18 ± 5 ng/g lipid. A similar trend was observed for the samples collected in 2002/03, with a mean ΣPBDE concentration of 28 ± 8 ng/g lipid for children <16 years and 15 ± 5 ng/g lipid for adults >16 years. No regional or gender-specific differences were observed. Measured data were compared with a model that we developed to incorporate the primary known exposure pathways (food, air, dust, breast milk) and clearance (half-life) data. The model was used to predict PBDE concentration trends and indicated that the elevated concentrations in infants were primarily due to maternal transfer and breast-milk consumption, with inhalation and ingestion of dust making a comparatively lower contribution.
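The authors' exposure model is not specified here, but the role of the clearance (half-life) data can be illustrated with a minimal one-compartment sketch, assuming a constant daily intake, first-order elimination and a fixed lipid mass (all values illustrative, not from the study):

```python
import math

def body_burden(intake_ng_per_day, half_life_years, years,
                lipid_mass_g=12000.0):
    """One-compartment sketch: constant intake with first-order clearance.

    Returns the lipid-normalised concentration (ng/g lipid) after `years`
    of exposure, assuming a fixed lipid mass (illustrative value).
    """
    k = math.log(2) / (half_life_years * 365.0)   # elimination rate (1/day)
    days = years * 365.0
    steady_state = intake_ng_per_day / (k * lipid_mass_g)
    # Concentration rises toward steady state as clearance balances intake
    return steady_state * (1.0 - math.exp(-k * days))

# Illustrative: 50 ng/day intake, 3-year half-life, after 10 years of exposure
c10 = body_burden(50.0, 3.0, 10.0)
```

Under this simple kinetic picture, a short half-life relative to the exposure period means concentrations plateau rather than accumulate indefinitely, which is why age trends carry information about both exposure and clearance.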

Resumo:

Background: Polybrominated diphenyl ethers (PBDEs) are used as flame retardants in many products and have been detected in human samples worldwide. Limited data show that concentrations are elevated in young children. Objectives: We investigated the association between PBDEs and age with an emphasis on young children from Australia in 2006–2007. Methods: We collected human blood serum samples (n = 2,420), which we stratified by age and sex and pooled for analysis of PBDEs. Results: The sum of BDE-47, -99, -100, and -153 concentrations (Σ4PBDE) increased from 0–0.5 years (mean ± SD, 14 ± 3.4 ng/g lipid) to peak at 2.6–3 years (51 ± 36 ng/g lipid; p < 0.001) and then decreased until 31–45 years (9.9 ± 1.6 ng/g lipid). We observed no further significant decrease among ages 31–45, 45–60 (p = 0.964), or > 60 years (p = 0.894). The mean Σ4PBDE concentration in cord blood (24 ± 14 ng/g lipid) did not differ significantly from that in adult serum at ages 15–30 (p = 0.198) or 31–45 years (p = 0.140). We found no temporal trend when we compared the present results with Australian PBDE data from 2002–2005. PBDE concentrations were higher in males than in females; however, this difference reached statistical significance only for BDE-153 (p = 0.05). Conclusions: The observed peak concentration at 2.6–3 years of age is later than the period when breast-feeding is typically ceased. This suggests that in addition to the exposure via human milk, young children have higher exposure to these chemicals and/or a lower capacity to eliminate them. Key words: Australia, children, cord blood, human blood serum, PBDEs, polybrominated diphenyl ethers. Environ Health Perspect 117:1461–1465 (2009). doi:10.1289/ehp.0900596

Resumo:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
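The trade-offs among these matching metrics can be sketched in a few lines; the windows and values below are illustrative:

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: cheap, but sensitive to brightness offsets."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def zsad(a, b):
    """Zero-mean SAD: subtracting the window means adds robustness to
    radiometric (gain/offset) distortion at extra computational cost."""
    a = a.astype(float)
    b = b.astype(float)
    return np.abs((a - a.mean()) - (b - b.mean())).sum()

def census(window):
    """Census transform: a bit string comparing each pixel to the centre.
    Matching then uses the Hamming distance between bit strings."""
    c = window[window.shape[0] // 2, window.shape[1] // 2]
    bits = (window < c).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

left = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
right = left + 15   # pure brightness offset between the two images
cost_sad = sad(left, right)     # fooled by the offset
cost_zsad = zsad(left, right)   # offset removed
```

With a pure brightness offset, SAD reports a large cost while ZSAD reports zero, and the census signature is unchanged because the relative ordering of pixels is preserved, which is exactly the robustness property the report attributes to the rank and census transforms.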
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.

Resumo:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance, and capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to portray accurately the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their output validated against data acquired at the same sites, so that the outputs would be truly descriptive of the performance of the facility. The form of the models needed to rest on a theoretical basis rather than on the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions so that they could be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
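As an example of the macroscopic speed-flow-density relationships referred to above, the classic Greenshields model links the three with a single linear assumption (an illustrative textbook form, not the HCM's calibrated curves):

```python
def greenshields(density, free_flow_speed=100.0, jam_density=120.0):
    """Classic Greenshields macroscopic model: speed falls linearly with
    density, so flow q = k * v is parabolic in density.
    Units: speed km/h, density veh/km, flow veh/h (illustrative values)."""
    speed = free_flow_speed * (1.0 - density / jam_density)
    flow = density * speed
    return speed, flow

# Under this model capacity occurs at half the jam density:
_, capacity = greenshields(60.0)   # k = kj / 2
```

The parabolic flow-density curve is what lets capacity be read off as the maximum uncongested flow, which is precisely the macroscopic prediction the thesis contrasts with its microscopic approach.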
Opportunities for further development of the models were identified, and procedures provided for their calibration and validation over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled: some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this behaviour. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delay, which reaches infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections are located upstream of on-ramps than when signalised intersections are, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
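Cowan's M3 headway distribution referred to above treats a proportion of vehicles as free, with headways shifted exponentially beyond a minimum headway. A minimal sketch (the flow and proportion of free vehicles below are illustrative, not the calibrated Brisbane values):

```python
import math

def m3_survival(t, q, alpha, delta=1.0):
    """Cowan M3: P(headway > t) for major-stream flow q (veh/s).

    alpha: proportion of free (unbunched) vehicles,
    delta: minimum headway (s); no headway is shorter than delta,
    so P(h > t) = 1 for t < delta.  For t >= delta:
        P(h > t) = alpha * exp(-lam * (t - delta)),
    with decay rate lam = alpha * q / (1 - delta * q).
    """
    if t < delta:
        return 1.0
    lam = alpha * q / (1.0 - delta * q)
    return alpha * math.exp(-lam * (t - delta))

# Illustrative: 1800 veh/h major stream, 70% free vehicles
q = 1800.0 / 3600.0                        # veh/s
p_gt_4 = m3_survival(4.0, q, alpha=0.7)    # chance a headway exceeds 4 s
```

In a gap acceptance analysis, survival probabilities of this kind feed directly into merge capacity: the minor stream can only enter during major stream headways longer than the critical gap.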

Resumo:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data are obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to determine which works best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts, and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to the training data to convert it into a 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
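One plausible reading of the Rank Sum scheme can be sketched as below; the binning and ranking details here are assumptions for illustration and may differ from the method as implemented in the thesis:

```python
import numpy as np

def fit_rank_sum(X, y, n_bins=10):
    """Per feature, estimate bin densities for each class."""
    classes = np.unique(y)
    edges, hists = [], []
    for j in range(X.shape[1]):
        e = np.histogram_bin_edges(X[:, j], bins=n_bins)
        edges.append(e)
        hists.append([np.histogram(X[y == c, j], bins=e, density=True)[0]
                      for c in classes])
    return classes, edges, hists

def predict_rank_sum(x, model):
    """Rank classes by the density of the bin the point falls in, per
    feature; predict the class with the highest sum of ranks."""
    classes, edges, hists = model
    totals = np.zeros(len(classes))
    for j, (e, h) in enumerate(zip(edges, hists)):
        b = np.clip(np.searchsorted(e, x[j], side="right") - 1, 0, len(e) - 2)
        dens = np.array([h_c[b] for h_c in h])
        totals += dens.argsort().argsort()  # rank 0..n-1, high rank = dense
    return classes[totals.argmax()]

# Illustrative two-class toy problem with well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(5.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit_rank_sum(X, y)
pred = predict_rank_sum(np.array([4.8, 5.2]), model)
```

Summing ranks rather than raw densities is what makes the method an abstraction over the bin densities: only the ordering of class densities within each bin matters, not their magnitudes.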