932 results for computation- and data-intensive applications


Abstract:

Time division multiple access (TDMA) based channel access mechanisms perform better than contention-based mechanisms in terms of channel utilization, reliability, and power consumption, especially for high-data-rate applications in wireless sensor networks (WSNs). Most existing distributed TDMA scheduling techniques can be classified as either static or dynamic. The primary purpose of static TDMA scheduling algorithms is to improve channel utilization by generating a schedule of smaller length, but they usually take longer to compute and hence are not suitable for WSNs whose topology changes dynamically. Dynamic TDMA scheduling algorithms, on the other hand, generate a schedule quickly but are not efficient in terms of the generated schedule length. In this paper, we propose a novel scheme for TDMA scheduling in WSNs that generates a schedule as compact as those of static scheduling algorithms while matching the runtime performance of dynamic scheduling algorithms. Furthermore, the proposed distributed TDMA scheduling algorithm can trade off schedule length against the time required to generate the schedule, allowing WSN developers to tune performance to the requirements of the prevalent WSN application and of re-scheduling. Finally, the proposed TDMA scheduling is tolerant of packet loss caused by an erroneous wireless channel. The algorithm has been simulated using the Castalia simulator to compare its performance with that of other approaches in terms of generated schedule length and the time required to generate the TDMA schedule. Simulation results show that the proposed algorithm generates a compact schedule in very little time.
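
To make the scheduling constraint concrete, here is a minimal Python sketch of a generic greedy slot assignment under the usual two-hop interference rule; it illustrates the problem only, not the algorithm proposed in the paper, and the topology shown is made up.

# Generic greedy TDMA slot assignment over a two-hop neighborhood.
# Illustration only: this is NOT the proposed algorithm, just the classical
# constraint that no node may share a slot with a one- or two-hop neighbor.

def two_hop_neighbors(adj, node):
    """Return the one- and two-hop neighbors of `node`."""
    one_hop = set(adj[node])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adj[n])
    return (one_hop | two_hop) - {node}

def greedy_tdma_schedule(adj):
    """Give each node the smallest slot unused in its two-hop neighborhood."""
    slot = {}
    # Scheduling higher-degree nodes first tends to shorten the schedule.
    for node in sorted(adj, key=lambda n: len(adj[n]), reverse=True):
        taken = {slot[m] for m in two_hop_neighbors(adj, node) if m in slot}
        s = 0
        while s in taken:
            s += 1
        slot[node] = s
    return slot

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
    schedule = greedy_tdma_schedule(adj)
    print(schedule, "schedule length:", max(schedule.values()) + 1)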

Abstract:

Eu3+-activated BaMoO4 phosphors were synthesized by the nitrate-citrate gel combustion method. Rietveld refinement confirmed that all the compounds crystallized in the scheelite-type tetragonal structure with the I4_1/a (No. 88) space group. The photoluminescence (PL) spectrum of the undoped BaMoO4 phosphor shows broad emission peaks at 465 and 605 nm, whereas the Eu3+-activated BaMoO4 phosphors show an intense 615 nm (^5D_0 -> ^7F_2) emission peak. Judd-Ofelt theory was applied to evaluate the intensity parameters (Omega_2, Omega_4) of the Eu3+-activated BaMoO4 phosphors. The transition probabilities (A_T), radiative lifetime (tau_rad), branching ratio (beta), stimulated emission cross-section (sigma_e), gain bandwidth (sigma_e x Delta lambda_eff), and optical gain (sigma_e x tau_rad) were obtained from the intensity parameters. CIE color coordinates confirmed that the BaMoO4 and Eu3+-activated BaMoO4 phosphors exhibit white and red luminescence, respectively. These results indicate that the present phosphors are potential candidates for red laser and white LED applications. (C) 2015 Elsevier B.V. All rights reserved.
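
For readers unfamiliar with the quantities listed above, the textbook Judd-Ofelt relations from which they are commonly computed are, in one standard convention (the paper's exact conventions may differ):

A_{ed}(J \to J') = \frac{64\pi^{4} e^{2} \nu^{3}}{3h(2J+1)} \, \frac{n(n^{2}+2)^{2}}{9} \sum_{\lambda=2,4,6} \Omega_{\lambda} \left| \langle \psi J \| U^{(\lambda)} \| \psi' J' \rangle \right|^{2},
\qquad
\tau_{\mathrm{rad}} = \Big( \sum_{J'} A(J \to J') \Big)^{-1},
\qquad
\beta_{J \to J'} = A(J \to J')\, \tau_{\mathrm{rad}},
\qquad
\sigma_{e}(\lambda_{p}) = \frac{\lambda_{p}^{4} \, A(J \to J')}{8\pi c\, n^{2}\, \Delta\lambda_{\mathrm{eff}}}.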

Abstract:

Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of 'multipoint' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that they can be extended to measure similarity among multiple points. We study tensor-flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further focus on a special case of the proposed kernels, a multi-point extension of the linear (dot-product) kernel, and show that a cubic-time tensor-flattening algorithm exists in this case. Finally, we illustrate the usefulness of our contributions on standard data sets and image segmentation tasks.
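
As a rough, generic illustration of a similarity defined jointly over several points (not the authors' construction), one can exponentiate a generalized Jensen-Shannon divergence among discrete distributions; all names below are hypothetical.

# Generic multi-distribution Jensen-Shannon similarity among m points,
# each represented as a discrete probability vector. Illustrative only;
# the kernels proposed in the paper are defined differently in detail.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def js_divergence(ps, weights=None):
    """Generalized Jensen-Shannon divergence among distributions `ps`."""
    ps = np.asarray(ps, dtype=float)
    m = len(ps)
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    mix = np.average(ps, axis=0, weights=w)
    return entropy(mix) - np.sum(w * np.array([entropy(p) for p in ps]))

def multipoint_similarity(ps):
    """Turn the divergence into a similarity; equals 1 when all points coincide."""
    return np.exp(-js_divergence(ps))

if __name__ == "__main__":
    a = np.array([0.7, 0.2, 0.1])
    b = np.array([0.6, 0.3, 0.1])
    c = np.array([0.1, 0.2, 0.7])
    print(multipoint_similarity([a, b, c]))
    print(multipoint_similarity([a, a, a]))  # -> 1.0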

Abstract:

Using atomistic molecular dynamics simulation, we study the discotic columnar liquid crystalline (LC) phases formed by a new organic compound with a hexa-peri-hexabenzocoronene (HBC) core bearing six pendant oligothiophene units, recently synthesized by Hu et al. [Adv. Mater. 26, 2066 (2014)]. This HBC-core-based LC phase was shown to respond to electric fields and has important applications in organic electronics. Our simulation results confirm the hexagonal arrangement of the columnar LC phase, with a lattice spacing consistent with that obtained from small-angle X-ray diffraction data. We have also calculated various positional and orientational correlation functions to characterize the ordering of the molecules in the columnar arrangement. The molecules in a column are arranged with an average twist of 25 degrees and an average intermolecular separation of about 5 angstroms. Interestingly, we find an overall tilt angle of 43 degrees between the columnar axis and the HBC core. We also simulate charge transport through this columnar phase and report the numerical value of the charge carrier mobility for this liquid crystal phase. The charge carrier mobility is strongly influenced by the twist angle and the average spacing of the molecules in the column. (C) 2015 AIP Publishing LLC.
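
A hypothetical post-processing sketch of the kind of twist-angle and orientational-correlation analysis mentioned above (the function and array names are assumptions, not the authors' code):

import numpy as np

def average_twist(in_plane_axes):
    """Mean angle (deg) between an in-plane molecular axis of successive
    molecules stacked in one column."""
    u = np.asarray(in_plane_axes, dtype=float)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    cosines = np.einsum("ij,ij->i", u[:-1], u[1:]).clip(-1.0, 1.0)
    return np.degrees(np.arccos(cosines)).mean()

def orientational_correlation(core_normals, max_sep):
    """g2(n) = <P2(u_i . u_{i+n})> along the column, for n = 1..max_sep."""
    u = np.asarray(core_normals, dtype=float)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    g2 = []
    for n in range(1, max_sep + 1):
        c = np.einsum("ij,ij->i", u[:-n], u[n:]).clip(-1.0, 1.0)
        g2.append(np.mean(1.5 * c**2 - 0.5))  # second Legendre polynomial
    return np.array(g2)

if __name__ == "__main__":
    # Toy column: in-plane axes rotated by 25 degrees per molecule.
    angles = np.radians(25.0 * np.arange(10))
    axes = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(10)])
    print(average_twist(axes))                      # ~25 degrees
    normals = np.tile([0.0, 0.0, 1.0], (10, 1))
    print(orientational_correlation(normals, 3))    # ~[1, 1, 1]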

Abstract:

In this paper, we consider applying a derived knowledge base on the sensitivity and specificity of the damage types to be detected by an SHM system being designed and qualified. Such efforts are necessary for developing the capability of an SHM system to reliably classify various probable damage types through a sequence of monitoring steps, i.e., damage precursor identification, damage detection, and monitoring of damage progression. We consider the particular problem of design requirements for SHM systems based on visual and ultrasonic NDE, for which damage detection sensitivity and specificity data definitions are established for a class of structural components. Methodologies for creating SHM system specifications are discussed in detail. Examples illustrate how the physics of a damage detection scheme limits the achievable detection sensitivity and specificity, and how this information can be used in algorithms that combine different NDE schemes in an SHM system to enhance efficiency and effectiveness. Statistical and data-driven models for determining the sensitivity and probability of damage detection (POD) are demonstrated for a plate with a one-sided line crack of varying size, using optical and ultrasonic inspection techniques.
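
As a generic illustration of how hit/miss inspection data can be turned into a POD(a) curve (a common log-size logistic model, in the spirit of MIL-HDBK-1823A, and not necessarily the statistical model used in the paper; the data below are toy values):

# Hit/miss probability-of-detection (POD) curve via logistic regression on
# log flaw size. Illustrative sketch only.
import numpy as np

def fit_pod(sizes_mm, detected, iters=2000, lr=0.1):
    """Fit POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))) by gradient ascent."""
    x = np.log(np.asarray(sizes_mm, dtype=float))
    y = np.asarray(detected, dtype=float)
    b0, b1 = 0.0, 1.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)          # gradient of the log-likelihood
        b1 += lr * np.mean((y - p) * x)
    return b0, b1

def pod(a_mm, b0, b1):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a_mm))))

if __name__ == "__main__":
    sizes = [0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]   # crack lengths, mm (toy)
    hits  = [0,   0,   1,   0,   1,   1,   1,   1  ]   # detected? (toy)
    b0, b1 = fit_pod(sizes, hits)
    print("POD at a = 2 mm:", pod(2.0, b0, b1))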

Abstract:

The mapping and geospatial analysis of benthic environments are multidisciplinary tasks that have become more accessible in recent years because of advances in technology and cost reductions in survey systems. The complex relationships that exist among physical, biological, and chemical seafloor components require advanced, integrated analysis techniques to enable scientists and others to visualize patterns and, in so doing, allow inferences to be made about benthic processes. Effective mapping, analysis, and visualization of marine habitats are particularly important because the subtidal seafloor environment is not readily viewed directly by eye. Research in benthic environments relies heavily, therefore, on remote sensing techniques to collect effective data. Because many benthic scientists are not mapping professionals, they may not adequately consider the links between data collection, data analysis, and data visualization. Projects often start with clear goals, but may be hampered by the technical details and skills required for maintaining data quality through the entire process from collection through analysis and presentation. The lack of technical understanding of the entire data handling process can represent a significant impediment to success. While many benthic mapping efforts have detailed their methodology as it relates to the overall scientific goals of a project, only a few published papers and reports focus on the analysis and visualization components (Paton et al. 1997, Weihe et al. 1999, Basu and Saxena 1999, Bruce et al. 1997). In particular, the benthic mapping literature often briefly describes data collection and analysis methods, but fails to provide sufficiently detailed explanation of particular analysis techniques or display methodologies so that others can employ them. In general, such techniques are in large part guided by the data acquisition methods, which can include both aerial and water-based remote sensing methods to map the seafloor without physical disturbance, as well as physical sampling methodologies (e.g., grab or core sampling). The terms benthic mapping and benthic habitat mapping are often used synonymously to describe seafloor mapping conducted for the purpose of benthic habitat identification. There is a subtle yet important difference, however, between general benthic mapping and benthic habitat mapping. The distinction is important because it dictates the sequential analysis and visualization techniques that are employed following data collection. In this paper general seafloor mapping for identification of regional geologic features and morphology is defined as benthic mapping. Benthic habitat mapping incorporates the regional scale geologic information but also includes higher resolution surveys and analysis of biological communities to identify the biological habitats. In addition, this paper adopts the definition of habitats established by Kostylev et al. (2001) as a “spatially defined area where the physical, chemical, and biological environment is distinctly different from the surrounding environment.” (PDF contains 31 pages)

Abstract:

The Alliance for Coastal Technologies (ACT) Workshop "Applications of in situ Fluorometers in Nearshore Waters" was held in Cape Elizabeth, Maine, February 2-4, 2005, with sponsorship by the Gulf of Maine Ocean Observing System (GoMOOS), one of the ACT partner organizations. The purpose of the workshop was to explore recent trends in fluorometry as it relates to resource management applications in nearshore environments. Participants included representatives from state and federal environmental management agencies as well as research institutions, many of whom are currently using this technology in their research and management applications. Manufacturers and developers of fluorometric measuring systems also attended the meeting. The workshop attendees discussed the historical and present uses of fluorometry and identified the great potential for its use by coastal managers to fulfill their regulatory and management objectives. Participants also identified some of the challenges associated with the correct use of fluorometers to estimate biomass and the rate of primary productivity. The workshop concluded that in order to expand the existing use of fluorometers in both academic and resource management disciplines, several issues concerning data collection, instrument calibration, and data interpretation needed to be addressed. Participants identified twelve recommendations, the top five of which are listed below. Recommendations: 1) Develop a "Guide" that describes the most important aspects of fluorescence measurements. This guide should be written by an expert party, with both research and industry input, and should be distributed by all manufacturers with their instrumentation. The guide should also be made available on the ACT website as well as those of other relevant organizations. The guide should include discussions of the following topics: the benefits of using fluorometers in research and resource management applications; what fluorometers can and cannot provide in terms of measurements; the necessary assumptions required before applying fluorometry; characterization and calibration of fluorometers; (pdf contains 32 pages)

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power-consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether it pushes data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, a number of key design concepts enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.

On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as to facilitate a modular approach to implant design, in which pre-fabricated parylene-and-IC modules are assembled together on demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.

Abstract:

Noncommutative geometry is a source of particle physics models with matter Lagrangians coupled to gravity. One may associate to any noncommutative space (A, H, D) its spectral action, which is defined in terms of the Dirac spectrum of its Dirac operator D. When viewing a spin manifold as a noncommutative space, D is the usual Dirac operator. In this paper, we give nonperturbative computations of the spectral action for quotients of SU(2), Bieberbach manifolds, and SU(3) equipped with a variety of geometries. Along the way we will compute several Dirac spectra and refer to applications of this computation.
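
For reference, the spectral action of a spectral triple (A, H, D) is the standard Chamseddine-Connes trace, with f a positive even cutoff function and Λ an energy scale; a nonperturbative computation therefore amounts to summing f over the actual Dirac eigenvalues (counted with multiplicity):

S(D) = \operatorname{Tr} f\!\left(\frac{D}{\Lambda}\right) = \sum_{\lambda \in \operatorname{Spec}(D)} f\!\left(\frac{\lambda}{\Lambda}\right).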

Abstract:

In addition to providing vital ecological services, coastal areas of North Carolina provide prized areas for habitation, recreation, and commercial fisheries. However, from a management perspective, the coasts of North Carolina are highly variable and complex. In-water constituents such as nutrients, suspended sediments, and chlorophyll a concentration can vary significantly over a broad spectrum of time and space scales. Rapid growth and land-use change continue to exert pressure on coastal lands. Coastal environments are also very vulnerable to short-term (e.g., hurricanes) and long-term (e.g., sea-level rise) natural changes that can result in significant loss of life, economic loss, or changes in coastal ecosystem functioning. Hence, the dynamic nature, effects of human-induced change over time, and vulnerability of coastal areas make it difficult to effectively monitor and manage these important state and national resources using traditional data collection technologies such as discrete monitoring stations and field surveys. In general, these approaches provide only a sparse network of data over limited time and space scales and generally are expensive and labor-intensive. Products derived from spectral images obtained by remote sensing instruments provide a unique vantage point from which to examine the dynamic nature of coastal environments. A primary advantage of remote sensing is that the altitude of observation provides a large-scale synoptic view relative to traditional field measurements. Equally important, the use of remote sensing for a broad range of research and environmental applications is now common due to major advances in data availability, data transfer, and computer technologies. To facilitate the widespread use of remote sensing products in North Carolina, the UNC Coastal Studies Institute (UNC-CSI) is developing the capability to acquire, process, and analyze remotely sensed data from several remote sensing instruments. In particular, UNC-CSI is developing regional remote sensing algorithms to examine the mobilization, transport, transformation, and fate of materials between coupled terrestrial and coastal ocean systems. To illustrate this work, we present the basic principles of remote sensing of coastal waters in the context of deriving information that supports efficient and effective management of coastal resources. (PDF contains 4 pages)

Abstract:

Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work had shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus on how it originated. The first chapter of this thesis systematically tests all previously proposed mechanisms for slab flattening against the Mexican case; we find only one model for which there is no contradictory evidence. The failure of the standard mechanisms for flat subduction in the Mexican example led us to question their applicability globally. The second chapter expands the search for a cause of flat subduction in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge produces two conjugate features, we recreate the history of subduction along the South American margin and find no correlation between the subduction of bathymetric highs and shallow subduction. These studies show that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat-slab subduction. The final chapter of this thesis examines the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we were able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.

Abstract:

Commercially available software packages for IBM PC compatibles are evaluated for use in data acquisition and processing work. Moss Landing Marine Laboratories (MLML) has acquired computers since 1978 for shipboard data acquisition (i.e., CTD, radiometric, etc.) and data processing. Hewlett-Packard desktops were used first, followed by a transition to DEC VAXstations, with software developed mostly by the author and others at MLML (Broenkow and Reaves, 1993; Feinholz and Broenkow, 1993; Broenkow et al., 1993). IBM PCs were at first very slow and limited in available software, so they were not used in the early days. Improved technology, such as higher-speed microprocessors and a wide range of commercially available software, makes use of the PC more reasonable today. MLML is making a transition toward using the PC for data acquisition and processing; its advantages are portability and available outside support.

Abstract:

Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.

Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
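
For context, the settling speed of a single small particle is often estimated with Stokes' law; the sketch below is only a textbook first approximation (the densities are assumed values, and it ignores the flocculation effects that make the measured velocities dilution-dependent):

def stokes_velocity(diameter_m, rho_particle, rho_fluid=1025.0, mu=1.07e-3):
    """Stokes settling velocity (m/s) of a small sphere in seawater:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu); valid only at low Reynolds number."""
    g = 9.81
    return g * diameter_m**2 * (rho_particle - rho_fluid) / (18.0 * mu)

if __name__ == "__main__":
    # A 10-micron particle with an assumed excess density of ~200 kg/m^3.
    print(stokes_velocity(10e-6, rho_particle=1225.0))  # ~1e-5 m/s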

Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.

The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the point discharge.
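
Such rise heights can be sanity-checked against the classical scaling for a buoyant plume in uniform stratification, z_max ~ C * B^(1/4) * N^(-3/4), where the prefactor C is of order 3-5 depending on the source; the buoyancy flux and stratification used below are illustrative assumptions, not values from the thesis:

def plume_max_rise(buoyancy_flux, brunt_vaisala, prefactor=3.8):
    """Classical scaling z_max ~ C * B**0.25 * N**-0.75 for a buoyant plume
    rising in a uniformly stratified ambient (prefactor varies by author)."""
    return prefactor * buoyancy_flux**0.25 * brunt_vaisala**-0.75

if __name__ == "__main__":
    B = 0.05   # assumed specific buoyancy flux, m^4/s^3 (illustrative)
    N = 3e-3   # assumed buoyancy frequency, 1/s (illustrative)
    print(plume_max_rise(B, N))  # roughly 10^2 m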

Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.

The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to the description of methods for bringing the interferometer to the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate the noise sources of the instrument.

The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
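
A minimal sketch of the kind of adaptive feedforward cancellation referred to above (a plain LMS filter driven by a witness channel, applied to synthetic data); it is not the actual aLIGO or 40m implementation:

import numpy as np

def lms_feedforward(witness, target, n_taps=64, mu=1e-3):
    """Adapt an FIR filter so that the filtered witness channel cancels the
    coherent part of the target channel; returns the residual signal."""
    w = np.zeros(n_taps)
    residual = np.array(target, dtype=float)
    for n in range(n_taps, len(target)):
        x = witness[n - n_taps:n][::-1]    # most recent witness samples first
        y = w @ x                          # feedforward prediction
        e = target[n] - y                  # residual after subtraction
        w += 2.0 * mu * e * x              # LMS weight update
        residual[n] = e
    return residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seismic = rng.standard_normal(20000)                  # witness channel
    coupled = np.convolve(seismic, [0.5, 0.3, 0.2])[:20000]
    darm = coupled + 0.1 * rng.standard_normal(20000)     # target channel
    cleaned = lms_feedforward(seismic, darm)
    print("RMS before:", darm.std(), " after:", cleaned[1000:].std())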

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last three to four months. This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of three higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.