Abstract:
We study the effect of parameter fluctuations, and the resultant multiplicative noise, on the synchronization of coupled chaotic systems. We introduce a new quantity, the fluctuation rate φ, defined as the number of perturbations applied to the parameter per unit time. It is shown that φ is the most significant quantity determining the quality of synchronization. Parameter fluctuations with high fluctuation rates are found not to destroy synchronization, irrespective of the statistical features of the fluctuations. We also present a quasi-analytic explanation of the relation between φ and the synchronization error.
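For readers who want to experiment with this setup, the following minimal sketch (not the authors' code; the choice of system, coupling form and all parameter values are illustrative assumptions) couples two Rössler oscillators through their x variables, re-draws one oscillator's parameter at random φ times per unit time, and reports the mean synchronization error:

```python
import numpy as np

def rossler(state, a, b=0.2, c=5.7):
    """Rossler vector field; parameter a is the one subject to fluctuations."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def coupled_step(s1, s2, a1, a2, eps, dt):
    """One Euler step of two x-coupled Rossler systems (coupling strength eps)."""
    d1 = rossler(s1, a1) + eps * np.array([s2[0] - s1[0], 0.0, 0.0])
    d2 = rossler(s2, a2) + eps * np.array([s1[0] - s2[0], 0.0, 0.0])
    return s1 + dt * d1, s2 + dt * d2

def sync_error(phi, eps=0.2, a0=0.165, spread=0.05, T=2000.0, dt=0.01, seed=0):
    """Mean synchronization error when one oscillator's parameter is re-drawn
    phi times per unit time (higher phi = faster fluctuations)."""
    rng = np.random.default_rng(seed)
    s1 = np.array([1.0, 0.0, 0.0])
    s2 = np.array([0.9, 0.1, 0.0])
    a2 = a0
    next_kick, err, n, t = 1.0 / phi, 0.0, 0, 0.0
    while t < T:
        if t >= next_kick:                      # parameter perturbation event
            a2 = a0 + spread * rng.uniform(-1, 1)
            next_kick += 1.0 / phi
        s1, s2 = coupled_step(s1, s2, a0, a2, eps, dt)
        t += dt
        if t > T / 2:                           # discard transient
            err += np.linalg.norm(s1 - s2)
            n += 1
    return err / n

for phi in (0.1, 1.0, 10.0, 100.0):
    print(f"phi = {phi:6.1f}  ->  mean sync error = {sync_error(phi):.4f}")
```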
Abstract:
The chaotic dynamics of directly modulated semiconductor lasers with delayed optoelectronic feedback is studied numerically. The effects of positive and negative delayed optoelectronic feedback in producing chaotic output from such lasers, with the nonlinear gain reduction factor in its optimum value range, are investigated using bifurcation diagrams. The results are confirmed by calculating Lyapunov exponents. A negative delayed optoelectronic feedback configuration is found to be more effective in inducing chaotic dynamics in such systems when the nonlinear gain reduction factor lies in the practical value range.
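The bifurcation-diagram procedure used here is generic: sweep a control parameter, integrate past the transient, and plot the local maxima of the output. Below is a hedged sketch of that procedure with a damped, driven Duffing oscillator standing in for the laser rate equations, which the abstract does not reproduce; all parameter ranges are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

def duffing_peaks(F, delta=0.3, omega=1.2, dt=0.01, T=400.0):
    """Integrate x'' = -delta*x' + x - x^3 + F*cos(omega*t) and return
    the local maxima of x(t) recorded after the transient has died out."""
    x, v, t = 1.0, 0.0, 0.0
    prev, cur = x, x
    peaks = []
    while t < T:
        a = -delta * v + x - x**3 + F * np.cos(omega * t)
        v += dt * a                 # semi-implicit Euler keeps the sketch short
        x += dt * v
        t += dt
        if t > T / 2 and cur > prev and cur > x:   # local maximum after transient
            peaks.append(cur)
        prev, cur = cur, x
    return peaks

# Sweep the drive amplitude and stack the recorded maxima: bands = periodic
# dynamics, smeared clouds = chaos.
for F in np.linspace(0.2, 0.65, 200):
    for p in duffing_peaks(F):
        plt.plot(F, p, ",k")
plt.xlabel("drive amplitude F (control parameter)")
plt.ylabel("local maxima of x(t)")
plt.show()
```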
Abstract:
The effect of coupling two chaotic Nd:YAG lasers with intracavity KTP crystals for frequency doubling is studied numerically for the case of the lasers operating in three longitudinal modes. The system is seen to go from chaotic to periodic and then to steady-state behaviour as the coupling constant is increased. The intensity time series and phase diagrams are drawn, and the Lyapunov characteristic exponent is calculated to characterize the chaotic and periodic regions.
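The largest Lyapunov exponent used to separate chaotic from periodic regions is commonly estimated by evolving two nearby trajectories and repeatedly renormalizing their separation (the Benettin method). A minimal sketch of that standard technique, with the Lorenz system standing in for the three-mode laser model:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def largest_lyapunov(steps=200_000, dt=0.001, d0=1e-8):
    """Benettin estimate: average the log stretching factor of an
    infinitesimal separation, renormalized at every step."""
    s = np.array([1.0, 1.0, 1.0])
    sp = s + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(steps):
        s = s + dt * lorenz(s)          # Euler steps keep the sketch short;
        sp = sp + dt * lorenz(sp)       # use RK4 or better for real work
        d = np.linalg.norm(sp - s)
        total += np.log(d / d0)
        sp = s + (sp - s) * (d0 / d)    # renormalize the separation
    return total / (steps * dt)

print(f"largest Lyapunov exponent ~ {largest_lyapunov():.3f}  (positive => chaos)")
```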
Abstract:
Thunderstorms, resulting from vigorous convective activity, are among the most spectacular weather phenomena in the atmosphere. A common feature of the weather during the pre-monsoon season over the Indo-Gangetic Plain and northeast India is the outburst of severe local convective storms, commonly known as ‘Nor’westers’ (as they move from northwest to southeast). These severe thunderstorms, with their attendant thunder, squall lines, lightning and hail, cause extensive agricultural losses, damage to structures and loss of life. In this paper, sensitivity experiments have been conducted with the Non-hydrostatic Mesoscale Model (NMM) to test the impact of three microphysical schemes in capturing the severe thunderstorm event that occurred over Kolkata on 15 May 2009. The results show that the WRF-NMM model with the Ferrier microphysical scheme reproduces the cloud and precipitation processes more realistically than the other schemes. We have also attempted to diagnose four severe thunderstorms that occurred during the pre-monsoon seasons of 2006, 2007 and 2008 through simulated radar reflectivity fields from the NMM model with the Ferrier microphysics scheme, validating the model results against Kolkata Doppler Weather Radar (DWR) observations. The composite radar reflectivity simulated by the WRF-NMM model clearly shows the movement of the severe thunderstorms seen in the DWR imagery, but fails to capture the observed intensity. These analyses demonstrate the capability of the high-resolution WRF-NMM model in simulating severe thunderstorm events and indicate that the 3 km model improves upon current abilities in simulating severe thunderstorms over the east Indian region.
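Model-versus-radar comparisons of the sort described (movement captured, intensity underestimated) are typically quantified with simple gridded verification metrics. A hedged sketch, using placeholder arrays rather than actual WRF-NMM or DWR fields; the threshold and metric choices are assumptions, not the paper's setup:

```python
import numpy as np

def verify_reflectivity(sim_dbz, obs_dbz, threshold=40.0):
    """Bias and RMSE in dBZ, plus a simple probability of detection
    for severe echoes exceeding the given threshold."""
    diff = sim_dbz - obs_dbz
    bias = float(np.mean(diff))
    rmse = float(np.sqrt(np.mean(diff**2)))
    hits = np.sum((sim_dbz > threshold) & (obs_dbz > threshold))
    obs_events = np.sum(obs_dbz > threshold)
    pod = float(hits / obs_events) if obs_events else float("nan")
    return {"bias_dbz": bias, "rmse_dbz": rmse, "pod": pod}

rng = np.random.default_rng(1)
obs = 20 + 25 * rng.random((100, 100))          # placeholder DWR composite
sim = obs + rng.normal(-3.0, 5.0, obs.shape)    # model under-predicting intensity
print(verify_reflectivity(sim, obs))
```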
Abstract:
The phytoplankton standing crop was assessed in detail along the South Eastern Arabian Sea (SEAS) during the different phases of coastal upwelling in 2009. During phase 1, intense upwelling was observed along the southern transects (8°N and 8.5°N). The maximum chlorophyll a concentration (22.7 mg m⁻³) was observed in the coastal waters off Thiruvananthapuram (8.5°N). Further north there was no signature of upwelling, but extensive Trichodesmium erythraeum blooms were present. Diatoms dominated the upwelling regions, with the centric diatom Chaetoceros curvisetus the dominant species along the 8°N transect. Along the 8.5°N transect, pennate diatoms such as Nitzschia seriata and Pseudo-nitzschia sp. dominated. During phase 2, upwelling of varying intensity was observed throughout the study area, with maximum chlorophyll a concentrations along the 9°N transect (25 mg m⁻³) and Chaetoceros curvisetus again the dominant phytoplankton. Along the 8.5°N transect, the pennate diatoms of phase 1 were replaced by centric diatoms such as Chaetoceros sp. The presence of the solitary pennate diatoms Amphora sp. and Navicula sp. was significant in the waters off Kochi. Upwelling waned during phase 3, becoming confined to the coastal waters of the southern transects, with a highest chlorophyll a concentration of 11.2 mg m⁻³. Along with diatoms, dinoflagellate cell densities increased during phases 2 and 3. In the northern transects (9°N and 10°N) the proportion of dinoflagellates was comparatively higher, represented mainly by Protoperidinium spp., Ceratium spp. and Dinophysis spp.
Abstract:
Geochemical composition constitutes a body of data for inferring the climatic conditions prevailing in an ecosystem. Both surficial and core sediment geochemistry are useful for monitoring, assessing and evaluating the marine environment. The aim of this work is to assess the relationships between the biogeochemical constituents of the Cochin Estuarine System (CES) and their modification after a long period of anoxia, and to identify the various processes that control sediment composition in this region, through a multivariate statistical approach. The study of present core sediment geochemistry therefore has a critical role in establishing a benchmark for their characterization. Sediment cores from four prominent zones of the CES were examined for various biogeochemical constituents, and the results serve as baseline records of core sediment status in the CES.
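A minimal sketch of the kind of multivariate treatment described, using scikit-learn's factor analysis on standardized geochemical variables; the variable names, placeholder data and number of factors are assumptions for illustration only:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Standardize the down-core geochemical variables, extract a few latent
# factors, and inspect which constituents load together on each factor.
variables = ["TOC", "total_N", "total_P", "Fe", "Mn", "clay_frac"]
rng = np.random.default_rng(7)
X = rng.normal(size=(120, len(variables)))      # placeholder for core data

Xz = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, random_state=0).fit(Xz)

for j, row in enumerate(fa.components_):
    top = sorted(zip(variables, row), key=lambda t: -abs(t[1]))[:3]
    print(f"factor {j + 1}: " + ", ".join(f"{v} ({w:+.2f})" for v, w in top))
```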
Abstract:
The distribution and accumulation of the rare earth elements (REE) in the sediments of the Cochin Estuary and the adjacent continental shelf were investigated. The rare earth elements La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu, and the metals Mg, V, Cr, Mn, Fe, Cu, Zn, U and Th, were analysed using standard analytical methods. The Post-Archean Australian Shale (PAAS) composition was used to normalise the rare earth element concentrations. The sediments were found to be more enriched in the lighter rare earth elements than in the heavier ones. The positive correlation between the concentrations of REE, Fe and Mn points to the precipitation of oxyhydroxides in the study area. Factor analysis and correlation analysis suggest common sources of origin for the REEs. The calculated Ce anomalies indicate that an oxic environment predominates at all stations except station 2. The Eu anomaly suggests that the REEs may originate from feldspar. Parameters such as total organic carbon, the U/Th ratio, authigenic U, and the Cu/Zn and V/Cr ratios confirm the oxic environment and hence the depositional behaviour of the REEs in the region.
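The Ce and Eu anomalies referred to are conventionally computed from shale-normalized concentrations, e.g. Ce/Ce* = Ce_N / √(La_N · Pr_N) and Eu/Eu* = Eu_N / √(Sm_N · Gd_N), where the subscript N denotes PAAS-normalized values. A short sketch (PAAS values quoted from Taylor & McLennan, 1985; verify against the original compilation before quantitative use):

```python
import numpy as np

# PAAS reference concentrations in ppm (Taylor & McLennan, 1985).
PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83, "Sm": 5.55, "Eu": 1.08, "Gd": 4.66}

def anomalies(sample_ppm):
    """Return (Ce/Ce*, Eu/Eu*) from raw sample concentrations in ppm."""
    n = {el: sample_ppm[el] / PAAS[el] for el in PAAS}   # shale-normalize
    ce_anom = n["Ce"] / np.sqrt(n["La"] * n["Pr"])
    eu_anom = n["Eu"] / np.sqrt(n["Sm"] * n["Gd"])
    return ce_anom, eu_anom

# Hypothetical estuarine sediment; Ce/Ce* below 1 is read as an oxic signal.
ce, eu = anomalies({"La": 30.0, "Ce": 55.0, "Pr": 7.0,
                    "Sm": 5.0, "Eu": 1.2, "Gd": 4.5})
print(f"Ce/Ce* = {ce:.2f}   Eu/Eu* = {eu:.2f}")
```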
Abstract:
A review of relativistic atomic structure calculations is given, with an emphasis on the Multiconfiguration Dirac-Fock (MCDF) method. Its problems and deficiencies are discussed, together with the contributions that go beyond the Dirac-Fock procedure.
Abstract:
Following an earlier observation in F VI, we identified the line pair $1s2s2p^2\,{}^5P - 1s2s2p3d\,{}^5P^o,\,{}^5D^o$ for the elements N, O and Mg, and tentatively for Al and Si, in beam-foil spectra. The assignment was established by comparison with Multi-Configuration Dirac-Fock calculations along the isoelectronic sequence. Using this method we also identified some quartet lines of lithium-like ions with Z > 10.
Abstract:
Big data is nowadays a fashionable topic, independently of what people mean when they use the term. But being big is just a matter of volume, although there is no clear agreement on the size threshold. On the other hand, it is easy to capture large amounts of data using a brute-force approach. So the real goal should not be big data, but to ask ourselves, for a given problem, what the right data is and how much of it is needed. For some problems this will imply big data, but for the majority of problems much less data is needed. In this talk we explore the trade-offs involved and the main problems that come with big data, using the Web as a case study: scalability, redundancy, bias, noise, spam, and privacy.

Speaker Biography: Ricardo Baeza-Yates

Ricardo Baeza-Yates has been VP of Research for Yahoo Labs since 2006, leading teams in the United States, Europe and Latin America, and has been based in Sunnyvale, California, since August 2014. During this time he has led the labs in Barcelona and Santiago de Chile, and between 2008 and 2012 he also oversaw the Haifa lab. He is also a part-time Professor in the Dept. of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona, Spain. During 2005 he was an ICREA research professor at the same university. Until 2004 he was a Professor at the Dept. of Computing Science of the University of Chile (on leave of absence to this day), where he was founder and Director of the Center for Web Research. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, after earning two master's degrees (M.Sc. CS & M.Eng. EE) and the electronics engineer degree from the University of Chile in Santiago. He is co-author of the best-selling textbook Modern Information Retrieval, published in 1999 by Addison-Wesley, with a second enlarged edition in 2011 that won the ASIST 2012 Book of the Year award. He is also co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 500 other publications. From 2002 to 2004 he served on the board of governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. He has received the Organization of American States award for young researchers in exact sciences (1993), the Graham Medal for innovation in computing given by the University of Waterloo to distinguished alumni (2007), the CLEI Latin American distinction for contributions to CS in the region (2009), and the National Award of the Chilean Association of Engineers (2010), among other distinctions. In 2003 he became the first computer scientist elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow.
Abstract:
BACKGROUND: The isolation of free fetal cells or fetal DNA from maternal blood opens a window of non-invasive diagnostic possibilities for monogenic and chromosomal pathologies, in addition to allowing identification of fetal sex and fetal Rh status. Multiple studies have now evaluated the efficacy of these methods, showing them to be cost-effective and lower-risk than the gold standard. This paper describes the evidence on non-invasive prenatal diagnosis found through a systematic review of the literature. OBJECTIVES: The objective of this study was to gather the evidence meeting the search criteria on non-invasive fetal diagnosis based on free fetal cells in maternal blood, in order to determine its diagnostic utility. METHODS: A systematic review of the literature was carried out to determine whether non-invasive prenatal diagnosis based on free fetal cells in maternal blood is effective as a diagnostic method. RESULTS: 5,893 articles met the search criteria; 67 met the inclusion criteria: 49.3% (33/67) were cross-sectional studies, 38.8% (26/67) cohort studies, and 11.9% (8/67) case-control studies. Results on sensitivity, specificity and type of test were obtained. CONCLUSION: This systematic review shows that non-invasive prenatal diagnosis is a feasible, reproducible and sensitive technique for fetal diagnosis that avoids the risks of invasive diagnosis.
Abstract:
Evolutionary computation, and genetic algorithms in particular, are increasingly used by organizations to solve management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing, and several state-of-the-art surveys have been published. Despite this, no work has explicitly and systematically evaluated the use of genetic algorithms on problems specific to international business (examples include international logistics, international trade, international marketing, international finance and international strategy). The purpose of this degree project is therefore to survey the current state of applications of genetic algorithms in international business.
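To make the surveyed technique concrete, here is a minimal genetic algorithm on a toy bit-string problem; the encoding, fitness function and all parameters are illustrative assumptions, not drawn from any application in the review:

```python
import random

def fitness(bits):
    return sum(bits)                      # toy objective: maximize ones

def evolve(n_bits=20, pop_size=30, generations=40, p_mut=0.02, seed=42):
    """Tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                       # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)                       # crossover point
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} / 20")
```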