Abstract:
This paper deals with Thucydides' famous digression in the sixth book of his history on the fall of the tyranny at Athens (Thuc. 6.54-59) and its relationship to Herodotus' account. Thucydides' digression (and more specifically its polemical tone) has provoked controversy among commentators, who have analysed the narratives of the two historians in depth from both a historical and a historiographical perspective. This study aims to contribute to that discussion through three suggestions: first, Thucydides engages not only with the short section on the tyrannicides in Herodotus' Histories (i.e. Hdt. 5.55-65), but rather with Herodotus' entire narrative of the liberation of Athens from tyranny, which extends up to the speech of Socles (i.e. Hdt. 5.55-5.96.2); second, Thucydides' corrections to Herodotus' account are minor; third, given that Thucydides' divergences from Herodotus are not decisive for establishing the correct version of events, the polemical tone of his digression becomes even harder to explain. This paper tentatively suggests that Thucydides' polemical attitude makes more sense when interpreted in the context of the historian's rivalry with Herodotus.
Abstract:
Under present climate conditions, convection at high latitudes of the North Pacific is restricted to shallower depths than in the North Atlantic. To what extent this asymmetry between the two ocean basins was maintained over the past 20 kyr is poorly known because there are few unambiguous proxy records of ventilation from the North Pacific. We present new data for two sediment cores from the California margin at 800 and 1600 m depth to argue that the depth of ventilation shifted repeatedly in the northeast Pacific over the course of deglaciation. The evidence includes benthic foraminiferal Cd/Ca, 18O/16O, and 13C/12C data as well as radiocarbon age differences between benthic and planktonic foraminifera. A number of features in the shallower of the two cores, including an interval of laminated sediments, are consistent with changes in ventilation over the past 20 kyr suggested by alternations between laminated and bioturbated sediments in the Santa Barbara Basin and the Gulf of California [Keigwin and Jones, 1990 doi:10.1029/PA005i006p01009; Kennett and Ingram, 1995 doi:10.1038/377510a0; Behl and Kennett, 1996 doi:10.1038/379243a0]. Data from the deeper of the two California margin cores suggest that during times of reduced ventilation at 800 m, ventilation was enhanced at 1600 m depth, and vice versa. This pronounced depth dependence of ventilation needs to be taken into account when exploring potential teleconnections between the North Pacific and the North Atlantic.
Abstract:
Particle mixing rates have been determined for 5 South Atlantic/Antarctic and 3 equatorial Pacific deep-sea cores using excess 210Pb and 32Si measurements. Radionuclide profiles from these siliceous, calcareous, and clay-rich sediments have been evaluated using a steady state vertical advection diffusion model. In Antarctic siliceous sediments, 210Pb mixing coefficients (0.04-0.16 cm**2/y) are in reasonable agreement with the 32Si mixing coefficient (0.2 or 0.4 cm**2/y, depending on the 32Si half-life). In an equatorial Pacific sediment core, however, the 210Pb mixing coefficient (0.22 cm**2/y) is 3-7 times greater than the 32Si mixing coefficient (0.03 or 0.07 cm**2/y). The difference between 210Pb and 32Si mixing rates in the Pacific sediments results from: (1) non-steady state mixing and differences in the characteristic time and depth scales of the two radionuclides, (2) preferential mixing of fine-grained clay particles containing most of the 210Pb activity relative to coarser particles (large radiolaria) containing the 32Si activity, or (3) the supply of 222Rn from the bottom of manganese nodules, which increases the measured excess 210Pb activity (relative to 226Ra) at depth and artificially increases the 210Pb mixing coefficient. Based on 32Si data and pore water silica profiles, dissolution of biogenic silica in the sediment column appears to have a minor effect on the 32Si profile in the mixed layer. Deep-sea particle mixing rates reported in this study and in the literature do not correlate with sediment type, sediment accumulation rate, or surface productivity. Based on differences in mixing rate among three Antarctic cores collected within 50 km of each other, local variability in the intensity of deep-sea mixing appears to be as important as regional differences in sediment properties.
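A minimal sketch (not the authors' code) of how a biodiffusion coefficient is typically extracted from an excess 210Pb profile under the kind of steady-state diffusion model the abstract refers to: with mixing treated as diffusion and sedimentation neglected, excess activity decays as A(z) = A(0)·exp(−z·sqrt(λ/Db)), so Db follows from the slope of ln A versus depth. The profile values below are hypothetical.

```python
import numpy as np

# Hypothetical excess 210Pb profile (dpm/g) in the surface mixed layer
depth_cm = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])
excess_pb210 = np.array([22.0, 15.1, 10.4, 7.2, 4.9, 3.4])

# 210Pb decay constant (half-life ~22.3 yr)
lam = np.log(2) / 22.3  # 1/yr

# Steady-state diffusion-decay solution: A(z) = A(0) * exp(-z * sqrt(lam / Db))
# => ln A is linear in depth with slope -sqrt(lam / Db)
slope, intercept = np.polyfit(depth_cm, np.log(excess_pb210), 1)
Db = lam / slope**2  # cm^2/yr

print(f"fitted slope: {slope:.3f} 1/cm")
print(f"biodiffusion coefficient Db ~ {Db:.3f} cm^2/yr")
```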
Abstract:
The effects of ocean acidification and increased temperature on the physiology of six strains of the polar diatom Fragilariopsis cylindrus from Greenland were investigated. Experiments were performed under manipulated pH levels (8.0, 7.7, 7.4, and 7.1) and different temperatures (1, 5, and 8 °C) to simulate changes from present to plausible future levels. Each of the 12 scenarios was run for 7 days, and a significant interaction between temperature and pH on growth was detected. When increased temperature and acidification were combined, the two factors counterbalanced each other, and therefore no effect on the growth rates was found. Considered separately, growth rates increased with elevated temperature by 20-50% depending on the strain, while increasing acidification had a general negative effect on growth. At pH 7.7 and 7.4, the growth response varied considerably among strains, whereas a more uniform response was detected at pH 7.1, with most of the strains exhibiting growth rates reduced by 20-37% compared to pH 8.0. It should be emphasized that the significant interaction between temperature and pH means that the combination of the two parameters affected growth differently than either parameter considered alone. Based on these results, we anticipate that the polar diatom F. cylindrus will be unaffected by changes in temperature and pH within the range expected by the end of the century. In each simulated scenario, the variation in growth rates among the strains was larger than the variation observed across the whole range of changes in either pH or temperature. Climate change may therefore not affect the species as such, but may lead to changes in the population structure of the species, with strains exhibiting high phenotypic plasticity in temperature and pH tolerance coming to dominate the population.
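A minimal sketch of the kind of two-factor analysis described above, testing for a temperature × pH interaction on growth rate. The data frame is synthetic, and the use of an ordinary two-way ANOVA via statsmodels is an assumption for illustration, not necessarily the authors' statistical model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic growth rates (1/day) for a 3-temperature x 4-pH factorial design
temps = [1, 5, 8]            # degrees C
phs = [8.0, 7.7, 7.4, 7.1]
rows = []
for t in temps:
    for p in phs:
        base = 0.20 + 0.02 * t - 0.10 * (8.0 - p)   # main effects
        inter = 0.01 * t * (8.0 - p)                # interaction term
        for _ in range(6):                          # six strains / replicates
            rows.append({"temperature": t, "pH": p,
                         "growth": base + inter + rng.normal(0, 0.02)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: growth ~ temperature * pH (both treated as factors)
model = smf.ols("growth ~ C(temperature) * C(pH)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```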
Abstract:
It has been argued that poor productive performance is one of the critical sources of the stagnation of the African manufacturing sector, but firm-level empirical support for this claim is limited. Using inter-regional firm data from the garment industry, technical efficiency and its contribution to competitiveness, measured as unit costs, were compared between Kenyan and Bangladeshi firms. Our estimates indicate that there is no significant gap in the average technical efficiency of the two industries even under conservative estimation, although unit costs differ greatly between them. The higher unit cost of Kenyan firms mainly stems from higher labour costs, while the impact of inefficiency is quite small. Productivity therefore accounts for little of the stagnation of the garment industry in several African countries.
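A hypothetical back-of-the-envelope decomposition illustrating the point that unit-cost gaps can be driven by labour costs rather than technical efficiency; all figures are invented for illustration and are not taken from the study.

```python
# Hypothetical per-firm figures (not from the study): equal technical efficiency,
# different wage levels, showing how unit cost diverges anyway.
def unit_cost(wage, labour_hours, material_cost, frontier_output, technical_efficiency):
    """Unit cost = total input cost / realised output,
    where realised output = technical_efficiency * frontier_output."""
    output = technical_efficiency * frontier_output
    return (wage * labour_hours + material_cost) / output

# Same efficiency (0.7) and physical inputs, different hourly wages
high_wage_firm = unit_cost(wage=1.20, labour_hours=1000, material_cost=2000,
                           frontier_output=1500, technical_efficiency=0.7)
low_wage_firm = unit_cost(wage=0.40, labour_hours=1000, material_cost=2000,
                          frontier_output=1500, technical_efficiency=0.7)

print(f"unit cost (high-wage firm): {high_wage_firm:.2f}")
print(f"unit cost (low-wage firm):  {low_wage_firm:.2f}")
```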
Abstract:
This paper explores the process of creation of the netbook market by Taiwanese firms as an example of a disruptive innovation by latecomer firms. As an analytical framework, I employ the global value chain perspective to capture the dynamics of vertical inter-firm relationships that drive some firms in the chain to change the status quo of the industry. I then divide the process of the emergence of the netbook market into three consecutive stages, i.e. (1) the launch of the first-generation netbook by a Taiwanese firm named ASUSTeK, (2) the response of the two powerful platform leaders of the industry, Intel and Microsoft, to ASUSTeK’s innovation, and (3) the market entry by another powerful Taiwanese firm, Acer, and explain how Taiwanese firms broke the Intel-centric market and tapped into the market-creating innovation opportunities that had been suppressed by the two powerful platform leaders. I also show that the creation of the netbook industry was an evolutionary process in which a series of responses by different industry players led to changes in the status quo of the industry.
Abstract:
This article analyses a number of social and cultural aspects of the blog phenomenon with the methodological aid of a complexity model, the New Techno-social Environment (hereinafter also referred to by its Spanish acronym, NET, or Nuevo Entorno Tecnosocial), together with the socio-technical approach of the two blogologist authors. Both authors are researchers interested in the new reality of the Digital Universal Network (DUN). After a review of some basic definitions, the article moves on to highlight some key characteristics of an emerging blog culture and relates them to the properties of the NET. Then, after a brief practical parenthesis for people entering the blogosphere for the first time, we present some reflections on blogs as an evolution of virtual communities and on the changes experienced by the inhabitants of the infocity emerging from within the NET. The article concludes with a somewhat disturbing question: whether among these changes there might not be a gradual transformation of the structure and form of human intelligence.
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with radiation-absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms for extrapolating functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the error studied most extensively in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where a spatial filter can be applied. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT and then also applies a spatial filter. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical analyses of the noise in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver’s quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second computationally removes the leakage effect without requiring the substitution of the faulty component.
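A minimal sketch of the modal-filtering idea mentioned for noise suppression in planar near-field measurements, under simplifying assumptions: the sampled tangential field is transformed to its plane-wave spectrum with an FFT, spectral components outside the visible region (kx^2 + ky^2 > k0^2), which mostly carry noise for a measurement plane at moderate distance, are zeroed, and the field is transformed back. In practice, stronger filtering based on the AUT dimensions is possible; this is an illustration of the general technique, not the thesis' specific algorithms.

```python
import numpy as np

def modal_filter_planar(E, dx, dy, freq_hz):
    """Suppress spectral components outside the visible region of the
    plane-wave spectrum of a planar near-field scan E[ny, nx]."""
    c0 = 299_792_458.0
    k0 = 2 * np.pi * freq_hz / c0

    ny, nx = E.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)

    spectrum = np.fft.fft2(E)                 # plane-wave spectrum (modal domain)
    visible = KX**2 + KY**2 <= k0**2          # keep propagating modes only
    return np.fft.ifft2(spectrum * visible)   # back to the measurement plane

# Example: a noisy synthetic scan at 10 GHz sampled every half wavelength
freq = 10e9
lam = 299_792_458.0 / freq
x = np.arange(-64, 64) * lam / 2
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (10 * lam) ** 2)   # smooth synthetic aperture field
noise = 0.05 * (np.random.randn(*field.shape) + 1j * np.random.randn(*field.shape))
filtered = modal_filter_planar(field + noise, lam / 2, lam / 2, freq)
```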
Abstract:
An analytical solution of the two-body problem perturbed by a constant tangential acceleration is derived with the aid of perturbation theory. The solution, which is valid for circular and elliptic orbits with generic eccentricity, describes the instantaneous time variation of all orbital elements. A comparison with high-accuracy numerical results shows that the analytical method can be effectively applied to multiple-revolution low-thrust orbit transfers around planets and in interplanetary space with negligible error.
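As context for the kind of result described, the standard starting point is Gauss's variational equation for the semi-major axis under a purely tangential acceleration f_t (with v the orbital speed and mu the gravitational parameter), together with its average over one revolution of a near-circular orbit. This is textbook material, not the paper's full solution for all elements at generic eccentricity.

```latex
\frac{da}{dt} \;=\; \frac{2a^{2}v}{\mu}\,f_{t},
\qquad
\left\langle \frac{da}{dt} \right\rangle_{\text{near-circular}}
\;\simeq\; 2\sqrt{\frac{a^{3}}{\mu}}\; f_{t}.
```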
Abstract:
Combining the kinematical definitions of the two dimensionless parameters, the deceleration q(x) and the dimensionless Hubble parameter t0H(x), we obtain a differential equation (where x = t/t0 is the age of the universe relative to its present value t0). A first integration gives the function H(x). The present values of the Hubble parameter H(1) [approximately t0H(1) ≈ 1] and of the deceleration parameter [approximately q(1) ≈ −0.5] determine the function H(x). A second integration gives the cosmological scale factor a(x), and differentiation of a(x) gives the speed of expansion of the universe. The evolution of the universe that results from our approach is an initial extremely fast exponential expansion (inflation), followed by an almost linear expansion (first decelerated, later accelerated). In the future, at approximately t ≈ 3t0, there is a final exponential expansion, a second inflation that produces a disaggregation of the universe to infinity. We find the necessary and sufficient conditions for this disaggregation to occur. The precise value of the final age is given in terms of a single parameter: the present value of the deceleration parameter [q(1) ≈ −0.5]. This emerging picture of the history of the universe represents an important challenge and an opportunity for immediate research on the universe. These conclusions have been reached without the use of any particular cosmological model of the universe.
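A sketch of the kinematical identity presumably underlying the differential equation mentioned: from the definitions of H and q alone (no dynamical model), Ḣ = −(1+q)H²; writing h(x) = t0H with x = t/t0 gives a first-order equation whose integration yields H(x), and a second integration yields a(x).

```latex
H \equiv \frac{\dot a}{a}, \qquad
q \equiv -\frac{\ddot a \, a}{\dot a^{2}}
\;\;\Longrightarrow\;\;
\dot H = \frac{\ddot a}{a} - H^{2} = -(1+q)\,H^{2}
\quad\Longrightarrow\quad
\frac{dh}{dx} = -\bigl(1+q(x)\bigr)\,h^{2},
\qquad h \equiv t_{0}H, \;\; x \equiv t/t_{0}.
```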
Abstract:
The well-documented re-colonisation of the French large river basins of the Loire and the Rhone by the European otter and beaver allowed an analysis of the explanatory factors and threats to species movement in the river corridor. To what extent anthropogenic disturbance of the riparian zone influences corridor functioning is a central question in the understanding of ecological networks and in the definition of restoration goals for river networks. The generalist or specialist nature of target species may determine their responses to habitat quality and barriers in the riparian corridor. Detailed datasets of land use, human stressors and hydro-morphological characteristics of river segments for the entire river basins made it possible to identify the habitat requirements of the two species in the riparian zone. The identified critical factors were entered into a network analysis based on the ecological niche factor approach. Significant responses to riparian corridor quality, in terms of forest cover, channel straightening, and urbanisation and infrastructure in the riparian zone, are observed for both species, so they may well serve as indicators of corridor functioning. The hypothesis that generalists are less sensitive to human disturbance was rejected, since the otter, a generalist species, responded most strongly to hydro-morphological alterations and to human presence in general. The beaver responded most strongly to the physical environment, as expected for a specialist species. The difference in responses between generalist and specialist species is clearly present, and the two species have a strongly complementary indicator value. The interpretation of the network analysis outcomes stresses the need to estimate the ecological requirements of more species when evaluating riparian corridor functioning and in conservation planning.
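A heavily simplified sketch of the kind of univariate indices used in ecological niche factor approaches: marginality compares where a species sits relative to the overall availability of a habitat variable, and specialization compares the spread of used versus available conditions. The formula, data, and variable are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def enfa_univariate(global_values, species_values):
    """Illustrative ENFA-style indices for one habitat variable:
    marginality  - distance of the species mean from the global mean,
                   scaled by the global spread;
    specialization - how much narrower the species' range is than the global one."""
    g = np.asarray(global_values, dtype=float)
    s = np.asarray(species_values, dtype=float)
    marginality = abs(s.mean() - g.mean()) / (1.96 * g.std())
    specialization = g.std() / s.std()
    return marginality, specialization

# Hypothetical riparian forest cover (%) for all river segments vs. segments used by beaver
all_segments = np.random.default_rng(1).uniform(0, 100, 2000)
beaver_segments = np.random.default_rng(2).normal(70, 10, 300)
print(enfa_univariate(all_segments, beaver_segments))
```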
Abstract:
In this work, the capacity and the interference statistics of the uplink of high-altitude platforms (HAPs) for asynchronous and synchronous WCDMA systems are studied, assuming finite transmission power and imperfect power control. The propagation loss used to calculate the received signal power is due to distance, shadowing, and wall insertion loss. The uplink capacity for 3G and 3.75G services is given for different cell radii, assuming outdoor and indoor voice users only, data users only, and a combination of the two services. For a HAP serving 37 macrocells, the total uplink capacity is 3,034 outdoor voice users or 444 outdoor data users. When one or more of the users are indoors, the uplink capacity is 2,923 voice users or 444 data users for a wall entry loss of 10 dB. It is shown that the effect of adjacent-channel interference is very small.
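A rough, textbook-style WCDMA uplink load calculation showing how per-cell voice and data capacities of this kind are commonly estimated; the parameter values (activity factor, Eb/N0 targets, other-cell interference ratio, maximum load) are illustrative assumptions and do not reproduce the paper's HAP-specific model or its figures.

```python
# Textbook single-service WCDMA uplink load estimate, illustrative only.
def uplink_capacity(chip_rate, bit_rate, ebno_db, activity, other_cell_ratio, max_load=0.75):
    ebno = 10 ** (ebno_db / 10)
    processing_gain = chip_rate / bit_rate
    load_per_user = 1.0 / (1.0 + processing_gain / ebno)  # fraction of cell load per user
    return int(max_load / ((1.0 + other_cell_ratio) * activity * load_per_user))

CHIP_RATE = 3.84e6  # WCDMA chip rate (chips/s)

voice_users = uplink_capacity(CHIP_RATE, 12.2e3, ebno_db=5.0, activity=0.67, other_cell_ratio=0.55)
data_users = uplink_capacity(CHIP_RATE, 384e3, ebno_db=1.5, activity=1.0, other_cell_ratio=0.55)
print(voice_users, data_users)  # per-cell figures; multiply by the number of cells served
```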
Abstract:
The focus of this paper is to outline the practical experiences and the lessons learned from the assessment of the requirements management process in two industrial case studies. Furthermore, this paper explains the main structure of an alternative assessment approach that was used in the appraisal of the two case studies. The assessment approach helped us to determine the current state of the organizational requirements management process. We would like to point out that these practical experiences and lessons learned can be helpful in reducing the risks and costs of the on-site assessment process.
Abstract:
A one-dimensional inviscid slice model has been used to study numerically the influence of axial microgravity on the breaking of liquid bridges having a volume close to the gravitationless minimum-volume stability limit. Equilibrium shapes and stability limits have been obtained, as well as the dependence of the volume of the two drops formed after breaking on both the length and the volume of the liquid bridge. The breaking process has also been studied experimentally, and good agreement has been found between theory and experiment for neutrally buoyant systems.
Abstract:
Two scientific schools have coexisted since the beginning of genetics, one searching for factors of inheritance and the other applying biometrical models to study the relationships between relatives. With the development of molecular genetics, the possibilities of detecting genes with a noticeable effect on traits increased. Some genes with large or medium effects were localized in animals, although the most common result was to detect markers linked to these genes, opening the possibility of marker-assisted selection programs. When a large number of simple and inexpensive markers, the SNPs, became available, new possibilities opened up, since they do not require the presence of genes of large or medium effect controlling a trait: the whole genome is scanned. Using a large number of SNPs permits a prediction of the breeding value at birth that is accurate enough to be used, in some cases such as dairy cattle, to halve the generation interval. In other animal breeding programs, the implementation of genomic selection is less clear, and the way in which it can be useful should be carefully studied. The need for large populations for associating phenotypic data with markers, plus the need to repeat the process continuously, complicates its application in some cases. The implementation of the information provided by the SNPs in current genetic programs has led to the development of complex statistical tools, joining the efforts of the two schools, factorial and biometrical, which nowadays work closely together.
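A minimal sketch of the statistical core of genomic prediction as described above: ridge-regression (SNP-BLUP-style) shrinkage of thousands of marker effects, which are then summed into a genomic breeding value for a newly genotyped candidate. The simulated genotypes, marker count, and shrinkage parameter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a small training population: 500 animals, 2000 SNPs coded 0/1/2
n_animals, n_snps = 500, 2000
genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
true_effects = rng.normal(0, 0.05, n_snps)
phenotypes = genotypes @ true_effects + rng.normal(0, 1.0, n_animals)

# SNP-BLUP / ridge regression: shrink all marker effects toward zero
# (lambda would normally come from variance components; here it is a guess)
X = genotypes - genotypes.mean(axis=0)   # centre genotypes
y = phenotypes - phenotypes.mean()
lam = 500.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_snps), X.T @ y)

# Genomic breeding value of a newborn candidate: genotype it and sum its marker effects
candidate = rng.integers(0, 3, n_snps).astype(float) - genotypes.mean(axis=0)
gebv = candidate @ beta_hat
print(f"predicted genomic breeding value: {gebv:.3f}")
```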