941 results for Algorithms, Properties, the KCube Graphs
Abstract:
A solution of the lidar equation is discussed that permits combining backscatter and depolarization measurements to quantitatively distinguish two aerosol types with different depolarization properties. The method has been successfully applied to simultaneous observations of volcanic ash and boundary layer aerosol obtained in Exeter, United Kingdom, on 16 and 18 April 2010, permitting the contribution of the two aerosols to be quantified separately. First, a subset of the atmospheric profiles is used where the two aerosol types belong to clearly distinguished layers, for the purpose of characterizing the ash in terms of lidar ratio and depolarization. These quantities are then used in a three-component atmosphere solution scheme of the lidar equation applied to the full data set, in order to compute the optical properties of both aerosol types separately. On 16 April a thin ash layer, 100–400 m deep, is observed (average and maximum estimated ash optical depth: 0.11 and 0.2); it descends from ∼2800 to ∼1400 m altitude over a 6-hour period. On 18 April a double ash layer, ∼400 m deep, is observed just above the morning boundary layer (average and maximum estimated ash optical depth: 0.19 and 0.27). In the afternoon the ash is entrained into the boundary layer, and the latter reaches a depth of ∼1800 m (average and maximum estimated ash optical depth: 0.1 and 0.15). An additional ash layer, with a very small optical depth, was observed on 18 April at an altitude of 3500–4000 m. By converting the lidar optical measurements using estimates of volcanic ash specific extinction derived from other works, the observations suggest approximate peak ash concentrations of ∼1500 and ∼1000 μg/m3, respectively, on the two observation dates.
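The conversion in the final sentence, from a lidar-derived extinction profile to an ash mass concentration via the specific extinction, can be sketched as follows. The numerical inputs are illustrative values of the right order of magnitude, not results from this study.

```python
# Sketch: mass concentration from lidar extinction and specific extinction.
# Inputs below are assumed illustrative numbers, not this study's data.
def ash_mass_concentration(extinction_per_m, specific_extinction_m2_per_g):
    """Mass concentration (g/m^3) = extinction (1/m) / specific extinction (m^2/g)."""
    return extinction_per_m / specific_extinction_m2_per_g

peak = ash_mass_concentration(9.6e-4, 0.64)   # illustrative extinction and k
print(peak * 1e6)                             # concentration in micrograms per m^3
```

The same relation, integrated over the layer depth, links the reported optical depths to column mass loadings.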
Abstract:
The optical microstructures of thin sections of two liquid crystalline polymers are examined in the polarizing microscope. The polymers are random copolyesters based on hydroxybenzoic and hydroxynaphthoic acids (B-N), and hydroxybenzoic acid and ethylene terephthalate (B-ET). Sections cut from oriented samples, so as to include the extrusion direction, show microstructures in which there is no apparent preferred orientation of the axes describing the local optical anisotropy. The absence of preferred orientation in the microstructure, despite marked axial alignment of molecular chain segments as demonstrated by X-ray diffraction, is interpreted in terms of the polymer having biaxial optical properties. The implication of optical biaxiality is that, although the mesophases are nematic, the orientation of the molecules is correlated about three (orthogonal) axes over distances greater than a micron. The structure is classified as a multiaxial nematic.
Abstract:
Accurate observations of cloud microphysical properties are needed for evaluating and improving the representation of cloud processes in climate models, and for better estimates of the Earth's radiative budget. However, large differences are found in current cloud products retrieved from ground-based remote sensing measurements using various retrieval algorithms. Understanding the differences is an important step to address uncertainties in the cloud retrievals. In this study, an in-depth analysis of nine existing ground-based cloud retrievals using ARM remote sensing measurements is carried out. We place emphasis on boundary layer overcast clouds and high-level ice clouds, which are the focus of many current retrieval development efforts due to their radiative importance and relatively simple structure. Large systematic discrepancies in cloud microphysical properties are found in these two types of clouds among the nine cloud retrieval products, particularly for the cloud liquid and ice particle effective radius. Note that the differences among some retrieval products are even larger than the prescribed uncertainties reported by the retrieval algorithm developers. It is shown that most of these large differences have their roots in the retrieval theoretical bases and assumptions, as well as in input and constraint parameters. This study suggests the need to further validate current retrieval theories and assumptions, and even to develop new retrieval algorithms, with more observations under different cloud regimes.
Abstract:
Although it is known to be a rich source of the putative anti-cancer chemicals isothiocyanates, watercress has not been extensively studied for its cancer-preventing properties. The aim of this study was to investigate the potential chemoprotective effects of crude watercress extract toward three important stages in the carcinogenic process, namely initiation, proliferation, and metastasis (invasion), using established in vitro models. HT29 cells were used to investigate the protective effects of the extract on DNA damage and the cell cycle. The extract was not genotoxic but inhibited DNA damage induced by two of the three genotoxins used, namely hydrogen peroxide and fecal water, indicating the potential to inhibit initiation. It also caused an accumulation of cells in the S phase of the cell cycle, indicating possible cell cycle delay at this stage. The extract was shown to significantly inhibit invasion of HT115 cells through matrigel. Component analysis was also carried out in an attempt to determine the major phytochemicals present in both watercress leaves and the crude extract. In conclusion, the watercress extract proved to be significantly protective against the three stages of the carcinogenesis process investigated.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and thus databases are increasing at a fast pace. This leads to the utilisation of parallel computing technologies in order to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide and conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate and conquer, which has only recently become a focus of parallelisation. This work introduces and empirically evaluates a framework for the parallel induction of classification rules, generated by members of the Prism family of algorithms. All members of the Prism family follow the separate and conquer approach.
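The separate and conquer strategy that the Prism family follows can be sketched in a few lines: repeatedly induce one rule that covers only the target class, then separate out the examples it covers and conquer the remainder. This is a minimal single-node illustration with categorical attributes only, not the Prism algorithms themselves or their parallelisation.

```python
# Minimal separate-and-conquer rule induction sketch (Prism-style, simplified).
def induce_rules(examples, target_class):
    """examples: list of (attribute dict, label). Returns rules as lists of (attr, value)."""
    rules, remaining = [], list(examples)
    while any(label == target_class for _, label in remaining):
        rule, covered = [], remaining
        # Specialise the rule until it covers only target-class examples.
        while any(label != target_class for _, label in covered):
            candidates = {(a, v) for attrs, _ in covered for a, v in attrs.items()
                          if a not in dict(rule)}
            if not candidates:
                break
            def precision(term):
                sub = [e for e in covered if e[0].get(term[0]) == term[1]]
                pos = sum(1 for _, label in sub if label == target_class)
                return pos / len(sub) if sub else 0.0
            best = max(sorted(candidates), key=precision)  # greedy, deterministic ties
            rule.append(best)
            covered = [e for e in covered if e[0].get(best[0]) == best[1]]
        rules.append(rule)
        # "Separate": drop covered examples, then conquer the rest.
        remaining = [e for e in remaining
                     if not all(e[0].get(a) == v for a, v in rule)]
    return rules

data = [({"outlook": "sunny", "windy": "no"}, "play"),
        ({"outlook": "sunny", "windy": "yes"}, "stay"),
        ({"outlook": "rain", "windy": "no"}, "stay")]
print(induce_rules(data, "play"))
```

In the parallel setting described above, the expensive step is the term-precision evaluation, which is a natural candidate for distribution across workers.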
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, and it is therefore of the utmost importance to design energy-efficient algorithms to prolong the home network lifetime and reduce devices going to landfill. Sink mobility is an important technique to improve home network performance, including energy consumption, lifetime and end-to-end delay. It can also largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for sink node(s) is an NP-hard problem, and jointly optimizing routing algorithms with the mobile-sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption in networks of different scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
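The clustering idea underlying this line of work can be illustrated generically: elect the nodes with the most residual energy as cluster heads and attach every other node to its nearest head, so that long-range transmissions to the sink are concentrated at well-provisioned nodes. This is an assumed, generic sketch for illustration only, not the EMCA or MECA algorithms from the abstract.

```python
import math

# Generic energy-aware clustering sketch (illustrative; not EMCA/MECA).
def pick_cluster_heads(nodes, n_clusters):
    """nodes: list of (x, y, residual_energy) tuples.
    Highest-energy nodes become heads; others join their nearest head."""
    by_energy = sorted(nodes, key=lambda n: -n[2])
    heads = by_energy[:n_clusters]
    assignment = {}
    for node in nodes:
        nearest = min(heads, key=lambda h: math.dist(node[:2], h[:2]))
        assignment[node] = nearest
    return heads, assignment

nodes = [(0, 0, 0.9), (1, 0, 0.4), (5, 5, 0.8), (6, 5, 0.2)]
heads, assignment = pick_cluster_heads(nodes, 2)
print(heads)
```

Rotating the head role as residual energy drains, and re-running the election as a mobile sink moves, are the kinds of refinements the proposed algorithms address.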
Abstract:
The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset: the period 1989–2009 of the interim European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-Interim) data. This experiment is part of the community project Intercomparison of Mid Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well for the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, geographical patterns of strong trends, and the distribution shape for many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
Abstract:
For a Lévy process ξ=(ξ_t)_{t≥0} drifting to −∞, we define the so-called exponential functional as I_ξ = ∫_0^∞ e^{ξ_t} dt. Under mild conditions on ξ, we show that the following factorization of exponential functionals holds: I_ξ = I_{H⁻} × I_Y (equality in distribution), where × stands for the product of independent random variables, H⁻ is the descending ladder height process of ξ and Y is a spectrally positive Lévy process with a negative mean constructed from its ascending ladder height process. As a by-product, we generate an integral or power series representation for the law of I_ξ for a large class of Lévy processes with two-sided jumps and also derive some new distributional properties. The proof of our main result relies on a fine Markovian study of a class of generalized Ornstein–Uhlenbeck processes, which is itself of independent interest. We use and refine an alternative approach of studying the stationary measure of a Markov process which avoids some technicalities and difficulties that appear in the classical method of employing the generator of the dual Markov process.
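Exponential functionals can be explored numerically. The Monte Carlo sketch below checks the classical Dufresne identity for Brownian motion with negative drift, ∫_0^∞ exp(2(B_t − μt)) dt distributed as 1/(2γ_μ) with γ_μ a gamma variable, so E[I] = 1/(2(μ−1)) for μ > 1. This is a well-known special case used here only as a sanity check, not the ladder-height factorization established in the paper.

```python
import numpy as np

# Monte Carlo check of Dufresne's identity for xi_t = B_t - mu*t (mu > 1):
# I = int_0^inf exp(2 xi_t) dt  has  E[I] = 1 / (2 (mu - 1)).
rng = np.random.default_rng(42)
mu, dt, T, n_paths = 3.0, 2e-3, 10.0, 500
n_steps = int(T / dt)
steps = rng.normal(-mu * dt, np.sqrt(dt), size=(n_paths, n_steps))
xi = np.cumsum(steps, axis=1)                   # discretised paths of xi_t
I = dt * (1.0 + np.exp(2 * xi).sum(axis=1))     # left-point Riemann sum from t = 0
print(I.mean())                                  # should be close to 1/(2*(3-1)) = 0.25
```

The truncation at T = 10 is harmless here since exp(2ξ_t) decays like exp(−4t) in expectation.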
Abstract:
Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performances with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the nonlinear (NL) split-window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm as well as the standard deviation of the differences with drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
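For readers unfamiliar with split-window retrievals, a generic nonlinear (NL) form combines the 10.8 and 12.0 μm brightness temperatures, a first-guess SST and the view angle. The coefficients below are illustrative placeholders, not the operational CMS values, and the exact functional form used at CMS may differ.

```python
import math

# Generic NLSST-type split-window sketch (illustrative coefficients).
def nlsst(t11, t12, sst_guess, sat_zenith_deg,
          a=1.0, b=0.05, c=0.8, d=0.3):
    """t11, t12: brightness temperatures (K); sst_guess: first-guess SST (deg C)."""
    secant = 1.0 / math.cos(math.radians(sat_zenith_deg))
    return (a * t11
            + b * sst_guess * (t11 - t12)        # water-vapour term, scaled by first guess
            + c * (secant - 1.0) * (t11 - t12)   # path-length (view angle) correction
            + d)

print(round(nlsst(290.0, 288.5, 18.0, 30.0), 2))
```

The abstract's correction scheme amounts to subtracting a simulated error, computed by pushing forecast profiles through a radiative transfer model, from the output of such a formula.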
Abstract:
In winter, brine rejection from sea ice formation and export in the Weddell Sea, offshore of Filchner-Ronne Ice Shelf (FRIS), leads to the formation of High Salinity Shelf Water (HSSW). This dense water mass enters the cavity beneath FRIS by sinking southward down the sloping continental shelf towards the grounding line. Melting occurs when the HSSW encounters the ice shelf, and the meltwater released cools and freshens the HSSW to form a water mass known as Ice Shelf Water (ISW). If this ISW rises, the ‘ice pump’ is initiated (Lewis and Perkin, 1986), whereby the ascending ISW becomes supercooled, because the in-situ freezing temperature rises as pressure decreases, and deposits marine ice at shallower locations. Sandhäger et al. (2004) were able to infer the thickness patterns of marine ice deposits at the base of FRIS (figure 1), so the primary aim of this work is to try to understand the ocean flows that determine these patterns. The plume model we use to investigate ISW flow is described fully by Holland and Feltham (accepted), so only a relatively brief outline is presented here. The plume is simulated by combining a parameterisation of ice shelf basal interaction and a multiple-size-class frazil dynamics model with an unsteady, depth-averaged reduced-gravity plume model. In the model an active region of ISW evolves above and within an expanse of stagnant ambient fluid, which is considered to be ice-free and has fixed profiles of temperature and salinity. The two main assumptions of the model are that there is a well-mixed layer underneath the ice shelf and that the ambient fluid outside the plume is stagnant with fixed properties. The topography of the ice shelf that the plume flows beneath is set to the FRIS ice shelf draft calculated by Sandhäger et al. (2004), masked with the grounding line from the Antarctic Digital Database (ADD Consortium, 2002).
To initiate the plumes, we assume that the intrusion of dense HSSW initially causes melting at the points on the grounding line where the glaciological tributaries feeding FRIS go afloat.
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams are required to be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the Collaborative Data Stream Mining, where agents share knowledge to learn adaptive accurate models.
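A classic building block for processing unbounded streams on resource-constrained devices is reservoir sampling, which maintains a uniform random sample in fixed memory regardless of stream length. It is shown here purely as an illustration of the bounded-footprint style of algorithm the survey discusses, not as part of any framework named above.

```python
import random

# Reservoir sampling: a uniform size-k sample of a stream of unknown length,
# using O(k) memory. Illustrative of resource-aware stream processing.
def reservoir_sample(stream, k, seed=0):
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # item survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(10_000), 5))
```

The single pass and constant memory are exactly the properties that make such primitives suitable for smartphones and sensor devices.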
Abstract:
In the concluding paper of this tetralogy, we here use the different geomagnetic activity indices to reconstruct the near-Earth interplanetary magnetic field (IMF) and solar wind flow speed, as well as the open solar flux (OSF), from 1845 to the present day. The differences in how the various indices vary with near-Earth interplanetary parameters, which are here exploited to separate the effects of the IMF and solar wind speed, are shown to be statistically significant at the 93% level or above. Reconstructions are made using four combinations of different indices, compiled using different data and different algorithms, and the results are almost identical for all parameters. The required correction to the aa index is discussed by comparison with the Ap index from a more extensive network of mid-latitude stations. Data from the Helsinki magnetometer station are used to extend the aa index back to 1845, and the results are confirmed by comparison with the nearby St Petersburg observatory. The optimum variations, using all available long-term geomagnetic indices, of the near-Earth IMF and solar wind speed, and of the open solar flux, are presented, all with ±2σ uncertainties computed using the Monte Carlo technique outlined in the earlier papers. The open solar flux variation derived is shown to be very similar indeed to that obtained using the method of Lockwood et al. (1999).
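The general shape of a Monte Carlo uncertainty estimate for this kind of index-based reconstruction can be sketched with synthetic data: bootstrap the calibration between a geomagnetic index and an interplanetary parameter, refit each time, and take the spread of the reconstructed values. This is an assumed generic sketch, not the specific procedure of the paper series.

```python
import numpy as np

# Bootstrap Monte Carlo sketch of reconstruction uncertainty (synthetic data).
rng = np.random.default_rng(1)
index = np.linspace(5.0, 40.0, 60)              # stand-in geomagnetic index values
imf_true = 3.0 + 0.12 * index                   # assumed linear relation (illustrative)
imf_obs = imf_true + rng.normal(0.0, 0.4, index.size)

fits = []
for _ in range(1000):
    pick = rng.integers(0, index.size, index.size)       # resample the calibration data
    fits.append(np.polyfit(index[pick], imf_obs[pick], 1))
recon = np.array([np.polyval(f, 20.0) for f in fits])    # reconstruct at index = 20
lo, hi = np.percentile(recon, [2.5, 97.5])               # ~2-sigma band
print(round(recon.mean(), 2), round(hi - lo, 2))
```

Repeating this at every epoch yields a reconstruction with a pointwise ±2σ band of the kind quoted above.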
Abstract:
A method has been developed to estimate Aerosol Optical Depth (AOD), Fine Mode Fraction (FMF) and Single Scattering Albedo (SSA) over land surfaces using simulated Sentinel-3 data. The method uses inversion of a coupled surface/atmosphere radiative transfer model, and includes a general physical model of angular surface reflectance. An iterative process is used to determine the optimum value of the aerosol properties providing the best fit of the corrected reflectance values for a number of view angles and wavelengths with those provided by the physical model. A method of estimating AOD using only angular retrieval has previously been demonstrated on data from the ENVISAT and PROBA-1 satellite instruments, and is extended here to the synergistic spectral and angular sampling of Sentinel-3 and the additional aerosol properties. The method is tested using hyperspectral, multi-angle Compact High Resolution Imaging Spectrometer (CHRIS) images. The values obtained from these CHRIS observations are validated using ground based sun-photometer measurements. Results from 22 image sets using the synergistic retrieval and improved aerosol models show an RMSE of 0.06 in AOD, reduced to 0.03 over vegetated targets.
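The iterative-inversion idea, searching for the aerosol properties whose forward-modelled reflectances best fit the multi-angle observations, can be illustrated with a toy one-parameter version. The forward model below is a made-up stand-in for the coupled surface/atmosphere radiative transfer model, and only AOD is retrieved; it is a sketch of the fitting loop, not the paper's method.

```python
import numpy as np

# Toy angular forward model: reflectance vs. AOD and view angle (illustrative).
def forward(aod, view_angles_deg):
    mu = np.cos(np.radians(view_angles_deg))
    return 0.05 + 0.3 * (1.0 - np.exp(-aod / mu))

# Grid-search inversion: pick the AOD minimising the misfit to the observations.
def retrieve_aod(observed, view_angles_deg, grid=np.linspace(0.0, 1.0, 2001)):
    costs = [np.sum((forward(a, view_angles_deg) - observed) ** 2) for a in grid]
    return grid[int(np.argmin(costs))]

angles = np.array([0.0, 20.0, 40.0, 55.0])
obs = forward(0.25, angles)          # synthetic "observations" with true AOD = 0.25
print(retrieve_aod(obs, angles))
```

The real scheme iterates over several aerosol properties (AOD, FMF, SSA) and wavelengths simultaneously, with the angular surface reflectance model constraining the surface contribution.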
Abstract:
The relations between the rheological and electrical properties of NaY zeolite electrorheological fluid and its solid phase are studied. It is found that there exist complex relations between its electrical and rheological properties. The temperature spectra of the dielectric properties of the fluid under a high AC electric field are strongly field-strength dependent. The relation between the DC conductivity of the fluid and the exciting electric field is experimentally found to be log σ = A + B·E^(1/2), where A is a strong function of temperature but B only a very weak one. The shear stress of the fluid under a fixed electric field and temperature decreases with shear rate. A relaxation time for the adsorbed charges is estimated to be about 0.3 to 6.6 s in the temperature range from 280 to 380 K. The relaxation time qualitatively corresponds to the shear rate at which the shear stress begins to drop. The time-dependent leakage current of the ER fluids under a DC electric field is also measured. The conductivity increase is mainly caused by the structure evolution of the particles. The experimental results can be explained with the calculations of Davis (J. Appl. Phys. 81 (1997) pp. 1985–1991) and Martin (J. Chem. Phys. 110 (1999) pp. 4854–4866). It is predicted that the NaY zeolite ER fluid strength would degrade slowly.
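The empirical field dependence log σ = A + B·E^(1/2) is straightforward to evaluate once A and B have been fitted. The values of A and B below are illustrative placeholders, not the fits reported in the study.

```python
import math

# Evaluate the empirical relation log10(sigma) = A + B * sqrt(E).
# A, B are illustrative values, not this study's fitted parameters.
def conductivity(e_field, A=-8.0, B=0.002):
    """DC conductivity (S/m) for field strength e_field (V/m), per the
    log sigma = A + B*E^(1/2) relation reported above."""
    return 10.0 ** (A + B * math.sqrt(e_field))

print(conductivity(1.0e6))
```

Since A varies strongly with temperature while B barely does, a family of such curves at different temperatures shifts mostly vertically on a log σ vs. E^(1/2) plot.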
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations, which deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days).
In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. 
There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
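The two evaluation statistics quoted throughout, mean normalized bias (MNB) and temporal correlation, follow standard definitions; the implementation below assumes the conventional pointwise form of MNB, (model − obs)/obs averaged over stations or times.

```python
import numpy as np

# Standard model-evaluation metrics as used in the comparison above
# (assumed conventional definitions).
def mean_normalized_bias(model, obs):
    """MNB: mean of the pointwise normalized bias (model - obs) / obs."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean((model - obs) / obs)

def temporal_correlation(model, obs):
    """Pearson correlation between modeled and observed time series."""
    return np.corrcoef(model, obs)[0, 1]

obs = np.array([2.0, 4.0, 5.0, 8.0])
model = np.array([1.0, 2.0, 3.0, 4.0])    # model roughly half the observed values
print(round(mean_normalized_bias(model, obs), 3))
```

Note that an underestimating model can still correlate well with the observations, which is why the abstract interprets the MNB and the correlation together.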