910 results for Carolina Maria, consort of Ferdinand I, King of the Two Sicilies, 1752-1814.


Relevance: 100.00%

Publisher:

Abstract:

Solar infrared colors provide powerful constraints on the stellar effective temperature scale, but they must be measured with both accuracy and precision in order to do so. We fulfill this requirement by using line-depth ratios to derive, in a model-independent way, the infrared colors of the Sun, and we use the latter to test the zero point of the Casagrande et al. effective temperature scale, confirming its accuracy. Solar colors in the widely used Two Micron All Sky Survey (2MASS) JHKs and WISE W1–W4 systems are provided: (V − J)⊙ = 1.198, (V − H)⊙ = 1.484, (V − Ks)⊙ = 1.560, (J − H)⊙ = 0.286, (J − Ks)⊙ = 0.362, (H − Ks)⊙ = 0.076, (V − W1)⊙ = 1.608, (V − W2)⊙ = 1.563, (V − W3)⊙ = 1.552, and (V − W4)⊙ = 1.604. A cross-check of the effective temperatures derived by implementing 2MASS or WISE magnitudes in the infrared flux method confirms that the absolute calibration of the two systems agrees within the errors, possibly suggesting a 1% offset between the two, thus validating extant near- and mid-infrared absolute calibrations. While 2MASS magnitudes are usually well suited to deriving Teff, we find that a number of bright, solar-like stars exhibit anomalous WISE colors. In most cases this effect is spurious and can be attributed to lower-quality measurements, although for a couple of objects (3% ± 2% of the total sample) it might be real and may hint at the presence of warm/hot debris disks.
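
As a quick illustration of how the tabulated solar colors can be used, the sketch below compares a star's measured colors with the solar reference values quoted above and flags deviations; the 0.1 mag tolerance and the input values are invented for illustration, not taken from the paper.

```python
# Sketch: flag a solar-like star's anomalous colors by comparing against
# the solar reference colors quoted above. The 0.1 mag tolerance is a
# hypothetical threshold chosen for illustration.
SOLAR_COLORS = {
    "V-J": 1.198, "V-H": 1.484, "V-Ks": 1.560,
    "V-W1": 1.608, "V-W2": 1.563, "V-W3": 1.552, "V-W4": 1.604,
}

def anomalous_colors(star_colors, tolerance=0.1):
    """Return the colors deviating from the Sun's by more than `tolerance` mag."""
    return {band: value for band, value in star_colors.items()
            if band in SOLAR_COLORS and abs(value - SOLAR_COLORS[band]) > tolerance}

print(anomalous_colors({"V-J": 1.21, "V-W3": 1.95}))  # -> {'V-W3': 1.95}
```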

Relevance: 100.00%

Publisher:

Abstract:

Schroeder's backward integration method is the most widely used method for extracting the decay curve of an acoustic impulse response and calculating the reverberation time from that curve. The limits and possible improvements of this method are widely discussed in the literature. In this work a new method is proposed for evaluating the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the most accredited method in the literature. The EDT and reverberation-time values extracted from the energy decay curves calculated with both methods have been compared, both in terms of the values themselves and in terms of their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e. the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range decreases, they reveal increasing differences; in particular, the main differences are in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a "locally" defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new method is its independence from the background-noise estimation. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse-response measurements in the Italian historical theatres. The aim of this evaluation is to determine whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
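
For reference, the classical Schroeder backward integration that the new method is compared against can be written in a few lines; the sketch below (NumPy, not the paper's Matlab toolbox) also shows the standard T30 evaluation from the decay curve.

```python
import numpy as np

# Classical Schroeder backward integration: EDC(t) is the energy remaining
# in the impulse-response tail from t to the end, normalised to 0 dB at t = 0.
def schroeder_edc_db(impulse_response):
    energy = np.asarray(impulse_response, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]  # tail-to-start accumulation
    return 10.0 * np.log10(edc / edc[0])

# T30: fit the -5 dB to -35 dB span of the EDC by least squares and
# extrapolate the decay to -60 dB.
def t30_seconds(edc_db, fs):
    t = np.arange(len(edc_db)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second, negative
    return -60.0 / slope

fs = 48000  # synthetic 1 s exponential-decay impulse response, ~1 s T60
ir = np.exp(-6.9 * np.arange(fs) / fs) * np.random.default_rng(0).standard_normal(fs)
print(f"T30 = {t30_seconds(schroeder_edc_db(ir), fs):.2f} s")
```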

Relevance: 100.00%

Publisher:

Abstract:

This dissertation examines how some fundamental events in the history of Ireland emerge through the art of the mural. It is divided into three chapters. The first chapter opens with a brief presentation of the mural as a form of art with a semiotic and sociological function, with a particular focus on the socio-political importance it has had, and still has today, in Ireland, where murals are a significant means of expressing ideals, protest and commemoration. Part of this chapter also provides data on the number of murals and their location, with a particular focus on the two cities of Belfast and Derry. The first chapter ends with the presentation of an initiative put forth by the Arts Council of Northern Ireland, called "Building Peace through the Arts: Re-Imaging Communities", and questions its implementation on Irish soil. The second chapter provides a history of the murals in Northern Ireland, from the Unionists' early depictions of King Billy on the occasion of the 12 July annual celebrations to the Republican response. This is supported by an explanation of the two events that triggered the start of mural painting for each faction: the Battle of the Boyne for the Loyalists and the 1981 hunger strike for the Republicans. The third and last chapter of this dissertation provides a key to the main themes, symbols, acronyms and dominant colours found in Loyalist and Republican murals. Furthermore, one mural from each faction is examined more closely, with an analysis of the symbols present in it.

Relevance: 100.00%

Publisher:

Abstract:

This project intertwines philosophical and historico-literary themes, taking as its starting point the concept of tragic consciousness inherent in the epoch of classicism. The research makes use of ontological categories in order to describe the underlying principles of the image of the world created in the philosophical and scientific theories of the 17th century, as well as in the drama of the period. Using these categories brought Mr. Vilk to the conclusion that the classical picture of the world implied a certain dualism: not the Manichaean division between light and darkness, but the discrimination between nature and absolute being, i.e. God. Mr. Vilk begins with an examination of the philosophical essence of French classical theatre of the 17th and 18th centuries. The history of French classical tragedy can be divided into three periods: from the mid 17th to the early 19th century, when it triumphed all over France and exerted a powerful influence over almost all European countries; the period of its rejection by the Romantics, who declared classicism "artificial and rational"; and finally our own century, which has taken a more moderate line. Nevertheless, French classical tragedy has never fully recovered its status. Instead, it is ancient tragedy and the works of Shakespeare that are regarded as the most adequate embodiment of the tragic. Consequently they still provoke a great number of new interpretations, ranging from specialised literary criticism to more philosophical rumination. An important feature of classical tragedy is a system of rules and unities which reveals a hidden ontological structure of the world. The ontological picture of the dramatic world can be described in categories worked out by medieval philosophy: being, essence and existence. The first category is to be understood as a tendency toward permanency and stability (within eternity) connected with this or that fragment of dramatic reality. The second implies a certain set of permanent elements that make up that reality. And the third, existence, should be understood as "an act of being", as a realisation of permanently renewed processes of life. All of these categories can be found in every artistic reality, but the accent put on one or another, and their interrelations, create different ontological perspectives. Mr. Vilk plots the movement of thought, expressed in both philosophical and scientific discourses, away from Aristotle's essential forms and towards a prioritising of existence, and shows how new forms of literature and drama structured the world according to these evolving requirements. At the same time, the world created in classical tragedy fully preserves another ontological paradigm, being, as a fundamental permanence. As far as the tragic hero's motivations are concerned, this paradigm is revealed in the dedication of his whole self to some cause, and in his oath of fidelity, attitudes which shape his behaviour. It may be the idea of the State, or personal honour, or something borrowed from the emotional sphere, such as passionate love. Mr. Vilk views the conflicting ambivalence of existence and being, duty as responsibility and duty as fidelity, as underlying the main conflict of classical tragedy of the 17th century. Having plotted the movement of the being/existence duality through its manifestations in 17th-century tragedy, Mr. Vilk moves to the 18th century, when tragedy took a philosophical turn.
The dualistic view of the world was supplanted by the Enlightenment idea of a natural law rooted in nature. The main point of tragedy now was to reveal that such conflicts as might take place had an anti-rational nature, arising from a kind of superstition caused by social conditions. Mr. Vilk pursues these themes through the Russian dramatists of the 18th and early 19th centuries. He begins with Sumarakov, whose philosophical thought has a religious bias. According to Sumarakov, the dualism of the divineness and naturalness of man is on the one hand an eternal paradox, and on the other a moral challenge for humans to try to unite the two opposites. His early tragedies are concerned not with social evils or the triumph of natural feelings and human reason, but rather with the tragic disharmony in the nature of man and the world. Mr. Vilk turns next to the work of Kniazhnin. He is particularly keen to rescue Kniazhnin's reputation from the judgements of critics who accuse him of being imitative, and to that end analyses in detail the tragedy "Dido", in which Kniazhnin attempts to revive the image of great heroes and city-founders. Aeneas represents the idea of the "being" of Troy; his destiny is the re-establishment of the city (the future Rome). The moral aspect behind this idea is faithfulness: he devotes himself to the gods. Dido is also the creator of a city, endowed with "natural powers" and abilities, but her creation lacks the internal stability grounded in "being". The unity of the two motives is only achieved through Dido's sacrifice of herself and her city to Aeneas. Mr. Vilk's next subject is Kheraskov, the peculiarity of whose work lies in the influence of Freemasonic mysticism. This section deals with one of the most important philosophical assumptions contained in the Freemasonic literature of the time: the idea of the trinitarian hierarchy inherent in man and the world: body, soul and spirit; nature, law and grace. Finally, Mr. Vilk assesses the work of Ozerov, the last major Russian tragedian. The tragedies which earned him fame, "Oedipus in Athens", "Fingal" and "Dmitri Donskoi", present a compromise between the Enlightenment's emphasis on harmony and ontological tragic conflict. But it is in "Polixene" that a real meeting of the Russian tradition with the age-old history of the genre takes place. The male and female characters of "Polixene" distinctly express the elements of "being" and "existence". Each of the participants in the conflict possesses some dominant characteristic personifying a certain indispensable part of the moral world, a certain "virtue". But their independent efforts are unable to overcome the ontological gap separating them. The end of the tragedy, Polixene's sacrificial self-immolation, paradoxically combines the glorification of each party involved in the conflict with their condemnation. The final part of Mr. Vilk's research deals with the influence of "Polixene" upon subsequent dramatic art. In this respect Katenin's "Andromacha", inspired by "Polixene", is important to mention. In "Andromacha" a decisive divergence from the principles of the philosophical tragedy of Russian classicism, and from the ontology of classicism, occurs: a new character appears as an independent personality, directed by his private interest. It was Katenin who was to become the intermediary between Pushkin and classical tragedy.

Relevance: 100.00%

Publisher:

Abstract:

A robust CE method for the simultaneous determination of the enantiomers of ketamine and norketamine in equine plasma is described. It is based upon liquid-liquid extraction of ketamine and norketamine at alkaline pH from 1 mL of plasma, followed by analysis of the reconstituted extract by CE in the presence of a pH 2.5 Tris-phosphate buffer containing 10 mg/mL highly sulfated β-CD as chiral selector. Enantiomer plasma levels between 0.04 and 2.5 µg/mL are shown to provide linear calibration graphs. Intraday and interday precisions evaluated from peak-area ratios (n = 5) at the lowest calibrator concentration are <8% and <14%, respectively. The LOD for all enantiomers is 0.01 µg/mL. After i.v. bolus administration of 2.2 mg/kg racemic ketamine, the assay is demonstrated to provide reliable data for plasma samples of ponies under isoflurane anesthesia, of ponies premedicated with xylazine, and of one horse that received romifidine, L-methadone, guaifenesin, and isoflurane. In animals not premedicated with xylazine, ketamine N-demethylation is demonstrated to be enantioselective: the concentrations of the two ketamine enantiomers in plasma are equal, whereas S-norketamine is found in larger amounts than R-norketamine. In the group receiving xylazine, the data do not reveal this stereoselectivity.
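
As an illustration of the linear calibration step described above, the following sketch fits peak-area ratios against calibrator concentrations by least squares and back-calculates an unknown; the numerical values are invented for illustration and are not data from the paper.

```python
import numpy as np

# Illustrative linear calibration: peak-area ratios measured at known
# calibrator concentrations are fitted by least squares, and an unknown
# sample is read off the line. All numbers are invented.
conc = np.array([0.04, 0.1, 0.25, 0.5, 1.0, 2.5])       # calibrators, µg/mL
ratio = np.array([0.05, 0.13, 0.31, 0.63, 1.24, 3.10])  # measured area ratios

slope, intercept = np.polyfit(conc, ratio, 1)

def concentration(peak_area_ratio):
    """Back-calculate plasma concentration (µg/mL) from a peak-area ratio."""
    return (peak_area_ratio - intercept) / slope

print(f"{concentration(0.50):.3f} µg/mL")
```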

Relevance: 100.00%

Publisher:

Abstract:

We describe the characterization of the herpes simplex virus type 2 (HSV-2) gene encoding infected cell protein 32 (ICP32) and virion protein 19c (VP19c). We also demonstrate that the herpes simplex virus type 1 (HSV-1) UL38/ORF.553 open reading frame (ORF), which has been shown to specify a viral protein essential for capsid formation (B. Pertuiset, M. Boccara, J. Cebrian, N. Berthelot, S. Chousterman, F. Puvian-Dutilleul, J. Sisman, and P. Sheldrick, J. Virol. 63:2169-2179, 1989), must encode the cognate HSV-1 ICP32/VP19c protein. The region of the HSV-2 genome deduced to contain the gene specifying ICP32/VP19c was isolated and subcloned, and the nucleotide sequence of 2,158 base pairs of HSV-2 DNA mapping immediately upstream of the gene encoding the large subunit of the viral ribonucleotide reductase was determined. This region of the HSV-2 genome contains a large ORF capable of encoding two related polypeptides of molecular weight 50,538 and 49,472. Direct evidence that this ORF encodes HSV-2 ICP32/VP19c was provided by immunoblotting experiments using antisera directed against synthetic oligopeptides corresponding to internal portions of the predicted polypeptides encoded by the HSV-2 ORF, or antisera directed against a TrpE/HSV-2 ORF fusion protein. The type-common immunoreactivity of the two antisera, together with a comparison of the primary amino acid sequences of the predicted products of the HSV-2 ORF and of the equivalent genomic region of HSV-1, provided evidence that the HSV-1 UL38 ORF encodes the HSV-1 ICP32/VP19c. Analysis of the expression of the HSV-1 and HSV-2 ICP32/VP19c cognate proteins indicated that there may be differences in their modes of synthesis. Comparison of the predicted structure of the HSV-2 ICP32/VP19c protein with the structures of related proteins encoded by other herpesviruses suggested that the internal capsid architecture of the herpesvirus family varies substantially.
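
For readers unfamiliar with ORF mapping, the sketch below shows the basic idea of scanning a DNA sequence for open reading frames of a minimum length; it is a generic illustration, not the analysis pipeline used in the study.

```python
# Generic ORF scan: find ATG...stop stretches of a minimum length on the
# forward strand, the kind of analysis used to locate a coding region such
# as ICP32/VP19c. Illustrative only.
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=100):
    """Yield (start, end) of ORFs at least `min_codons` long, forward strand."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == START:
                start = i
            elif start is not None and codon in STOPS:
                if (i - start) // 3 >= min_codons:
                    yield (start, i + 3)
                start = None

print(list(find_orfs("ATG" + "GCT" * 120 + "TAA", min_codons=100)))  # [(0, 366)]
```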

Relevance: 100.00%

Publisher:

Abstract:

Aniridia (AN) is a congenital, panocular disorder of the eye characterized by the complete or partial absence of the iris. The disease occurs in both sporadic and familial forms; in the latter case it is inherited as an autosomal dominant trait with high penetrance. The objective of this study was to isolate and characterize the genes involved in AN and in Sey (the murine Small eye mutation), and thereby to gain a better understanding of the molecular basis of the two disorders. Using a positional cloning strategy, I cloned from the AN locus in human chromosomal band 11p13 a cDNA that is deleted in two patients with AN. The deletions in these patients overlap by about 70 kb and encompass the 3′ end of the cDNA. This cDNA detects a 2.7 kb mRNA encoded by a transcription unit estimated to span approximately 50 kb of genomic DNA. The message is specifically expressed in the tissues affected in all forms of AN, namely the presumptive iris, lens, neuroretina, the superficial layers of the cornea, the olfactory bulbs, and the cerebellum. Sequence analysis of the AN cDNA revealed a number of motifs characteristic of certain transcription factors. Chief among these are the paired domain, the homeodomain, and a carboxy-terminal domain rich in serine, threonine and proline residues. The overall structure shows high homology to the Drosophila segmentation gene paired and to members of the murine Pax family of developmental control genes. Using a conserved human genomic DNA sequence as probe, I was able to isolate an embryonic murine cDNA which is over 92% homologous in nucleotide sequence, and virtually identical at the amino acid level, to the human AN cDNA. The expression pattern of the murine gene is the same as that in man, supporting the conclusion that it probably corresponds to the Sey gene. Its specific expression in the neuroectodermal component of the eye and in glioblastomas, but not in the neural crest-derived PC12 pheochromocytoma cell line, suggests that a defect in neuroectodermal rather than mesodermal development might be the common etiological factor underlying AN and Sey.

Relevance: 100.00%

Publisher:

Abstract:

The ECHo (Electron Capture in 163Ho) Collaboration aims to investigate the calorimetric spectrum following the electron capture decay of 163Ho in order to determine the mass of the electron neutrino. The size of the neutrino mass is reflected in the endpoint region of the spectrum, i.e., the last few eV below the transition energy. To check for systematic uncertainties, an independent determination of this transition energy, the Q-value, is mandatory. Using the TRIGA-TRAP setup, we demonstrate the feasibility of performing this measurement by Penning-trap mass spectrometry. With the currently available purified 163Ho sample and an improved laser-ablation mini-RFQ ion source, we were able to perform direct mass measurements of 163Ho and 163Dy with a sample size of less than 10^17 atoms. The measurements were carried out by determining the ratio of the cyclotron frequencies of the two isotopes to those of carbon cluster ions using the time-of-flight ion cyclotron resonance method. The obtained mass excess values are ME(163Ho) = −66379.3(9) keV and ME(163Dy) = −66381.7(8) keV. In addition, the Q-value was measured for the first time by Penning-trap mass spectrometry, yielding Q = 2.5(7) keV.
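
As a consistency check that follows directly from the quoted numbers, the Q-value should equal the difference of the two mass excesses; the sketch below computes it with uncertainties added in quadrature (assuming uncorrelated errors, which ignores any correlation through the shared carbon-cluster reference).

```python
from math import hypot

# Cross-check: Q(163Ho EC) = ME(163Ho) - ME(163Dy), with the quoted
# uncertainties combined in quadrature (uncorrelated-error assumption).
ME_HO, SIG_HO = -66379.3, 0.9   # keV, 163Ho
ME_DY, SIG_DY = -66381.7, 0.8   # keV, 163Dy

q = ME_HO - ME_DY                # 2.4 keV
sigma_q = hypot(SIG_HO, SIG_DY)  # ~1.2 keV

print(f"Q = {q:.1f} ± {sigma_q:.1f} keV")  # consistent with the measured 2.5(7) keV
```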

Relevance: 100.00%

Publisher:

Abstract:

This paper explores the process of creation of the netbook market by Taiwanese firms as an example of a disruptive innovation by latecomer firms. As an analytical framework, I employ the global value chain perspective to capture the dynamics of the vertical inter-firm relationships that drive some firms in the chain to change the status quo of the industry. I then divide the process of the emergence of the netbook market into three consecutive stages: (1) the launch of the first-generation netbook by a Taiwanese firm named ASUSTeK; (2) the response of the industry's two powerful platform leaders, Intel and Microsoft, to ASUSTeK's innovation; and (3) the market entry of another powerful Taiwanese firm, Acer. I explain how Taiwanese firms broke the Intel-centric market and tapped into the market-creating innovation opportunities that had been suppressed by the two powerful platform leaders. I also show that the creation of the netbook industry was an evolutionary process in which a series of responses by different industry players led to changes in the status quo of the industry.

Relevance: 100.00%

Publisher:

Abstract:

This article analyses a number of social and cultural aspects of the blog phenomenon with the methodological aid of a complexity model, the New Techno-social Environment (hereinafter also referred to by its Spanish acronym NET, for Nuevo Entorno Tecnosocial), together with the socio-technical approach of the two blogologist authors, both researchers interested in the new reality of the Digital Universal Network (DUN). After a review of some basic definitions, the article moves on to highlight some key characteristics of an emerging blog culture and relates them to the properties of the NET. Then, after a brief practical parenthesis for people entering the blogosphere for the first time, we present some reflections on blogs as an evolution of virtual communities and on the changes experienced by the inhabitants of the infocity emerging from within the NET. The article concludes with a somewhat disturbing question: whether among these changes there might not be a gradual transformation of the structure and form of human intelligence.

Relevance: 100.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulate free-space propagation conditions thanks to the absorption of that material. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms for extrapolating functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the error most widely studied in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where a spatial filtering can be applied. The last back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method for suppressing reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first locates the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to remove the leakage effect computationally, without requiring the substitution of the faulty component.
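
To make the modal-filtering idea concrete, the sketch below transforms planar near-field samples to the plane-wave spectrum, zeroes the modes outside the visible region (which carry mostly noise for a well-sampled measurement), and transforms back; it is a schematic illustration with assumed parameters, not the implementation developed in the Thesis.

```python
import numpy as np

# Schematic modal filtering of planar near-field data: FFT to the plane-wave
# spectrum, discard evanescent modes (kx^2 + ky^2 > k0^2), inverse FFT.
def modal_filter(field, dx, wavelength):
    """field: 2-D complex array sampled every dx metres on the scan plane."""
    k0 = 2 * np.pi / wavelength
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(field)
    spectrum[KX**2 + KY**2 > k0**2] = 0.0   # keep propagating modes only
    return np.fft.ifft2(spectrum)

# Toy usage with assumed sampling (10 mm grid, 30 mm wavelength):
noisy = np.ones((64, 64), dtype=complex) + 0.1 * np.random.default_rng(0).standard_normal((64, 64))
filtered = modal_filter(noisy, dx=0.01, wavelength=0.03)
```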

Relevance: 100.00%

Publisher:

Abstract:

Combining the kinematical definitions of two dimensionless parameters, the deceleration q(x) and the Hubble parameter t0H(x), we obtain a differential equation (where x = t/t0 is the age of the universe relative to its present value t0). A first integration gives the function H(x). The present values of the Hubble parameter H(1) [approximately t0H(1) ≈ 1] and of the deceleration parameter [approximately q(1) ≈ −0.5] determine the function H(x). A second integration gives the cosmological scale factor a(x). Differentiation of a(x) gives the speed of expansion of the universe. The evolution of the universe that results from our approach is: an initial extremely fast exponential expansion (inflation), followed by an almost linear expansion (first decelerated, later accelerated). For the future, at approximately t ≈ 3t0, there is a final exponential expansion, a second inflation that produces a disaggregation of the universe to infinity. We find the necessary and sufficient conditions for this disaggregation to occur. The precise value of the final age is given with only one parameter: the present value of the deceleration parameter [q(1) ≈ −0.5]. This emerging picture of the history of the universe represents an important challenge, an opportunity for immediate research on the universe. These conclusions have been reached without the use of any particular cosmological model of the universe.
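
The kinematical definitions H = (da/dt)/a and q = −a(d²a/dt²)/(da/dt)² imply dH/dt = −(1 + q)H², i.e. dh/dx = −(1 + q(x))h² with h = t0H and x = t/t0. The sketch below integrates this relation, assuming for illustration a constant q = q(1) = −0.5, whereas the paper works with a general q(x).

```python
import numpy as np
from scipy.integrate import solve_ivp

# From H = adot/a and q = -addot*a/adot^2 it follows that
# dh/dx = -(1 + q) h^2, with h = t0*H and x = t/t0.
# Illustrative only: q is held constant at its present value q(1) = -0.5.
q = -0.5
h1 = 1.0  # t0*H at x = 1 (today)

def rhs(x, y):
    h, a = y
    return [-(1 + q) * h**2, h * a]  # dh/dx and da/dx = h*a

sol = solve_ivp(rhs, [1.0, 3.0], [h1, 1.0], dense_output=True)
x = np.linspace(1.0, 3.0, 5)
print(dict(zip(x, sol.sol(x)[1])))  # scale factor a(x) out to x = 3
```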

Relevance: 100.00%

Publisher:

Abstract:

The well-documented re-colonisation of the large French river basins of the Loire and Rhone by the European otter and beaver allowed the analysis of explanatory factors for, and threats to, species movement in the river corridor. To what extent anthropogenic disturbance of the riparian zone influences corridor functioning is a central question in the understanding of ecological networks and in the definition of restoration goals for river networks. The generalist or specialist nature of the target species may determine their responses to habitat quality and barriers in the riparian corridor. Detailed datasets of land use, human stressors and hydro-morphological characteristics of river segments for the entire river basins allowed the habitat requirements of the two species for the riparian zone to be identified. The identified critical factors were entered into a network analysis based on the ecological niche factor approach. Significant responses to riparian corridor quality, in terms of forest cover, channel-straightening alterations, and urbanisation and infrastructure in the riparian zone, are observed for both species, so they may well serve as indicators of corridor functioning. The hypothesis that generalists are less sensitive to human disturbance was rejected, since the otter, a generalist species, responded most strongly to hydro-morphological alterations and to human presence in general. The beaver responded most strongly to the physical environment, as expected for this specialist species. The difference in responses between the generalist and the specialist species is clearly present, and the two species have a strongly complementary indicator value. The interpretation of the network-analysis outcomes stresses the need to estimate the ecological requirements of more species when evaluating riparian corridor functioning and in conservation planning.
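
To indicate what the ecological niche factor approach measures, the sketch below computes a per-variable marginality: how far the mean habitat used by a species departs from the global mean of each environmental variable, in units of the global standard deviation. The toy numbers are invented; this is not the study's dataset or its full analysis.

```python
import numpy as np

# Per-variable marginality in the spirit of the ecological niche factor
# approach (ENFA): (mean of used segments - global mean) / global std.
# Columns might be, e.g., forest cover and a channel-alteration index.
env_all = np.array([[0.40, 2.1], [0.55, 1.8], [0.10, 3.5], [0.70, 1.2]])  # all segments
env_used = np.array([[0.60, 1.5], [0.70, 1.1]])  # segments occupied by the species

marginality = (env_used.mean(axis=0) - env_all.mean(axis=0)) / env_all.std(axis=0)
print(marginality)  # one standardized departure per environmental variable
```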

Relevance: 100.00%

Publisher:

Abstract:

In this work, the capacity and interference statistics of the uplink of high-altitude platforms (HAPs) for asynchronous and synchronous WCDMA systems, assuming finite transmission power and imperfect power control, are studied. The propagation loss used to calculate the received signal power is due to distance, shadowing, and wall insertion loss. The uplink capacity for 3G and 3.75G services is given for different cell radii, assuming outdoor and indoor voice users only, data users only, and a combination of the two services. For a 37-macrocell HAP, the total uplink capacity is 3,034 outdoor voice users or 444 outdoor data users. When one or more users are indoors, the uplink capacity is 2,923 voice users or 444 data users for a wall entry loss of 10 dB. It is shown that the effect of adjacent-channel interference is very small.
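
For orientation, the single-cell uplink capacity of a CDMA system is often estimated with the classical pole-capacity expression N ≈ 1 + (W/R)/((Eb/N0)·ν·(1 + f)); the sketch below evaluates it with generic WCDMA-like numbers. The paper's model is richer (finite transmit power, imperfect power control, HAP macrocell geometry), so this is background only.

```python
# Classical single-cell CDMA uplink pole capacity,
# N ~ 1 + (W/R) / ((Eb/N0) * v * (1 + f)). Generic WCDMA-like values,
# chosen for illustration; not the paper's model.
def pole_capacity(chip_rate, bit_rate, ebno_db, activity=0.67, other_cell=0.55):
    processing_gain = chip_rate / bit_rate   # W / R
    ebno = 10 ** (ebno_db / 10)              # linear Eb/N0
    return 1 + processing_gain / (ebno * activity * (1 + other_cell))

# 12.2 kbps voice over 3.84 Mcps WCDMA at Eb/N0 = 5 dB -> roughly 97 users:
print(round(pole_capacity(3.84e6, 12.2e3, 5.0)))
```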

Relevance: 100.00%

Publisher:

Abstract:

A one-dimensional inviscid slice model has been used to study numerically the influence of axial microgravity on the breaking of liquid bridges having a volume close to the gravitationless minimum-volume stability limit. Equilibrium shapes and stability limits have been obtained, as well as the dependence of the volumes of the two drops formed after breaking on both the length and the volume of the liquid bridge. The breaking process has also been studied experimentally. Good agreement has been found between theory and experiment for neutrally buoyant systems.
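
As background to the minimum-volume stability limit mentioned above, a cylindrical liquid bridge between equal coaxial disks of radius R is stable in zero gravity only if its length satisfies the classical Plateau-Rayleigh condition L < 2πR; the sketch below checks this criterion (it is not the paper's slice model).

```python
import math

# Background check, not the paper's slice model: a cylindrical liquid bridge
# between equal coaxial disks of radius R is stable in zero gravity only if
# L < 2*pi*R, i.e. slenderness L/(2R) < pi.
def cylindrical_bridge_stable(length, radius):
    slenderness = length / (2.0 * radius)
    return slenderness < math.pi

print(cylindrical_bridge_stable(length=0.05, radius=0.01))  # True  (slenderness 2.5)
print(cylindrical_bridge_stable(length=0.07, radius=0.01))  # False (slenderness 3.5)
```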