976 results for Dataset


Relevance:

10.00%

Publisher:

Abstract:

The Integrated Environmental Monitoring (IEM) project, part of the Asia-Pacific Environmental Innovation Strategy (APEIS) project, developed an integrated environmental monitoring system that can be used to detect, monitor, and assess environmental disasters, degradation, and their impacts in the Asia-Pacific region. The system primarily employs data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on the Earth Observing System (EOS) Terra and Aqua satellites, as well as ground observations at five sites in different ecological systems in China. Preliminary analysis of both annual and daily variations of water, heat and CO2 fluxes confirms that the system is basically working well. The results show that both the latent heat flux and the CO2 flux are much greater in the crop field than in the grassland and the saline desert, whereas the sensible heat flux shows the opposite trend. Different MODIS data products agree with the ground observations to very different degrees: MODIS-derived land surface temperature correlates closely with the measured values, but LAI and NPP differ considerably from the ground measurements, which suggests that the algorithms used to process MODIS data need to be revised using local datasets. We are now using the APEIS-FLUX data to develop an integrated model that can simulate regional water, heat and carbon fluxes. Finally, we expect to use this model to develop more precise high-order MODIS products for the Asia-Pacific region.

Relevance:

10.00%

Publisher:

Abstract:

Generally speaking, outliers are data points that lie far from the other data, yet they may carry extremely important information. A new outlier fuzzy kernel clustering algorithm is proposed to detect the outliers in a sample set. The original data space is mapped into a feature space through a Mercer kernel, and each vector in the feature space is assigned a dynamic weight. Building on the classical FCM fuzzy clustering algorithm, a new clustering objective function is formulated in the feature space; by optimizing this objective function, the weight of each data point is obtained, and the outliers in the sample set are identified according to the magnitude of these weights. Simulation results demonstrate the feasibility and effectiveness of the proposed outlier fuzzy kernel clustering algorithm.
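A minimal Python/NumPy sketch of the general idea is given below. It uses a Gaussian (Mercer) kernel, standard kernel fuzzy c-means membership updates, and a simple inverse-distance weighting to flag outliers; the exact objective function and weight-update rule of the proposed algorithm are not specified in the abstract, so the weighting step here is only an illustrative assumption.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_fcm_outliers(X, n_clusters=2, m=2.0, sigma=1.0, n_iter=100, seed=0):
    """Kernel fuzzy c-means in feature space; returns memberships and
    per-sample weights (small weight = likely outlier)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = gaussian_kernel(X, sigma)
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U**m
        s = Um.sum(axis=1)                  # per-cluster normalisation
        # squared feature-space distance between phi(x_j) and cluster centre v_i
        d2 = (np.diag(K)[None, :]
              - 2.0 * (Um @ K) / s[:, None]
              + (np.einsum('ik,il,kl->i', Um, Um, K) / s**2)[:, None])
        d2 = np.maximum(d2, 1e-12)
        U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        U /= U.sum(axis=0)
    # heuristic dynamic weight: inverse of distance to the closest centre
    w = 1.0 / (1.0 + d2.min(axis=0))
    return U, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)),
                   rng.normal(3, 0.3, (50, 2)),
                   [[10.0, 10.0]]])          # one obvious outlier
    _, w = kernel_fcm_outliers(X, n_clusters=2)
    print("most outlying sample index:", int(np.argmin(w)))
```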

Relevance:

10.00%

Publisher:

Abstract:

As the first arrival in deep seismic sounding (DSS), the Pg phase, with its high intensity and reliable detection, provides important data for studying the properties of the sedimentary layers and the shape of the crystalline basement. Conventionally, the sedimentary cover is represented by an isotropic model with linearly increasing velocity when interpreting the Pg event. In reality, the sedimentary medium should be anisotropic, since preferentially aligned cracks or fractures and thin layering are common features of the upper crust, so the interpretation of the Pg event needs to take seismic velocity anisotropy into account. Traveltime calculation is the basis of data processing and interpretation. Because of the limited quality and coverage of DSS data, we restrict ourselves to elliptical anisotropy. In this thesis, we first examine the meaning of elliptical anisotropy for studies of crustal structure and properties, then derive the traveltime-offset relationship of the Pg event by assuming a linearly increasing velocity model with elliptical anisotropy, and present an inversion scheme that maps a Pg traveltime-offset dataset to the seismic velocity and anisotropy of the shallow crust. We compare the Pg traveltimes calculated with our analytic formula against a numerical method to test their accuracy. To obtain the lateral variation of elliptical anisotropy along the profile, a tomographic inversion method based on the derived formula is presented, in which the profile is divided into rectangular cells. Anisotropic imaging of crustal structure and properties is an efficient tool for crustal studies: the imaging results help us interpret the seismic data and infer rock properties in order to analyze the interaction between layers. Traveltime calculation is also the basis of imaging. Based on the ray-tracing equations, the thesis presents a three-dimensional implementation for layered models with an arbitrary type of anisotropy, together with an example of Pg traveltime calculation in such media. This traveltime calculation method is computationally involved and is suited only to nonlinear inversion. The perturbation method of traveltime calculation in anisotropic media is a linearization approach: it establishes a direct relation between the seismic parameters and the traveltime and is well suited to inversion for anisotropic structural imaging. The thesis also presents a P-wave imaging method for layered TTI media. Southeastern China is an important part of the tectonic framework of the continental margin of eastern China and is commonly assumed to comprise the Yangtze block and the Cathaysia block, the two major tectonic units in the region; it is a typical geological and geophysical zone. In this part, we first fit the Pg traveltimes with the numerical ray-tracing method, but this approach proves unsuitable here because of its inefficiency and inherent limitations. Using the analytic method, we fit the Pg and Sg traveltimes, obtain the lateral variation of elliptical anisotropy, and discuss its implications. The northeastern margin of the Qinghai-Tibet Plateau is also of particular interest because it lies at the junction of the Eurasian and Indian plates and many strong earthquakes have occurred there in recent years. We use the Pg data to obtain the lateral variation of elliptical anisotropy there and discuss its possible meaning.
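As a rough illustration of the kind of closed-form traveltime-offset relation involved, the Python sketch below uses the classical diving-wave formula for a medium whose vertical velocity increases linearly with depth, t(x) = (2/k) arcsinh(kx / 2v0), and extends it to elliptical anisotropy by assuming a depth-independent ratio A = vh/vv, in which case the medium is equivalent to an isotropic one after a vertical stretch. The thesis's actual derivation and inversion scheme are not reproduced here, and the model parameters are placeholders.

```python
import numpy as np

def pg_traveltime(offsets, v0, k, aniso_ratio=1.0):
    """Surface-to-surface diving-wave (Pg) traveltime for a vertical velocity
    v_v(z) = v0 + k*z and elliptical anisotropy with constant ratio
    A = v_h / v_v.  For constant A the elliptical medium maps to an isotropic
    medium with velocity v_h after stretching the depth axis, giving
        t(x) = (2/k) * arcsinh( k*x / (2*A*v0) ).
    """
    vh0 = aniso_ratio * v0            # horizontal velocity at the surface
    return (2.0 / k) * np.arcsinh(k * offsets / (2.0 * vh0))

if __name__ == "__main__":
    x = np.linspace(1e3, 80e3, 80)                                # offsets in metres
    t_iso = pg_traveltime(x, v0=4000.0, k=0.5)                    # isotropic case
    t_ani = pg_traveltime(x, v0=4000.0, k=0.5, aniso_ratio=1.05)  # 5% faster horizontally
    # the anisotropy mainly changes the apparent velocity at large offsets
    print("traveltime difference at 80 km: %.3f s" % (t_iso[-1] - t_ani[-1]))
```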

Relevance:

10.00%

Publisher:

Abstract:

A high-resolution prestack imaging technique for seismic data is developed in this thesis. With this technique, the reflection coefficients of thin sand sheets can be obtained in order to understand and identify thin oil reservoirs. One-way wave-equation-based migration methods can model wave-propagation effects such as multi-arrivals more accurately and recover nearly correct reflected energy in complex inhomogeneous media, and therefore have clear advantages in imaging complex structures; it is thus a natural choice to apply the proposed high-resolution imaging to prestack depth-migrated gathers. However, one of the main shortcomings of one-way wave-equation migration is its low computational efficiency, so improving the computational efficiency is addressed first. The thesis first presents a method to improve the efficiency of prestack depth migration: a frequency-dependent, varying-step depth-extrapolation scheme combined with a table-driven, one-point wavefield interpolation technique for wave-equation migration methods. The frequency-dependent varying-step scheme reduces the computational cost of wavefield depth extrapolation, and the table-driven, one-point interpolation reconstructs the extrapolated wavefield on an equal, desired vertical step with high efficiency. The proposed varying-step extrapolation plus one-point interpolation reduces the computational cost by two thirds compared with equal-step depth extrapolation of the wavefield, while giving almost the same image. The frequency-dependent varying-step scheme is derived using the optimum split-step Fourier method, but it can also be used with other frequency-domain wave-equation migration methods. The proposed method is demonstrated on an impulse response, the 2-D Marmousi dataset, a 3-D salt dataset and a 3-D field dataset. A high-resolution prestack imaging method is presented in the second part of the thesis. A seismic interference method is used to solve for the relative reflection coefficients, and high-resolution imaging is obtained by introducing a sparseness-constrained least-squares inversion into the reflection-coefficient imaging. Gaussian regularization is imposed first, and a smoothed solution is obtained by solving the equations derived from the least-squares inversion; Cauchy regularization is then introduced into the least-squares inversion, yielding a sparse, high-resolution solution for the relative reflection coefficients. The proposed scheme can be used together with other prestack imaging methods when higher resolution is needed in a target zone. The seismic interference method and the solution of the sparseness-constrained least-squares inversion are presented in theory, and the proposed method is demonstrated on synthetic examples and field data.
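To illustrate the sparseness-constrained least-squares step, the sketch below solves a toy reflectivity inversion d = G m with a Cauchy-type regularizer via iteratively reweighted least squares. The interference operator, the preconditioned conjugate-gradient solver and the actual parameterization used in the thesis are not reproduced; the convolution operator, wavelet and constants here are purely illustrative.

```python
import numpy as np

def ricker(n=41, f=25.0, dt=0.002):
    """Ricker wavelet used to build a toy convolution operator."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def cauchy_sparse_inversion(G, d, lam=0.1, sigma=1e-2, n_iter=20):
    """Sparse spike inversion with a Cauchy-type regulariser, solved by
    iteratively reweighted least squares (IRLS); lam sets the strength."""
    m = np.zeros(G.shape[1])
    GtG, Gtd = G.T @ G, G.T @ d
    for _ in range(n_iter):
        # Cauchy weights: large spikes are penalised less, so they survive
        W = 1.0 / (1.0 + (m / sigma) ** 2)
        m = np.linalg.solve(GtG + lam * np.diag(W), Gtd)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nt = 200
    r_true = np.zeros(nt)
    r_true[[50, 55, 120]] = [0.8, -0.5, 0.6]          # thin-bed-like reflectivity
    w = ricker()
    G = np.array([np.convolve(np.eye(nt)[i], w, mode="same") for i in range(nt)]).T
    d = G @ r_true + 0.01 * rng.standard_normal(nt)
    r_est = cauchy_sparse_inversion(G, d)
    print("largest recovered spikes at samples:", np.argsort(np.abs(r_est))[-3:])
```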

Relevance:

10.00%

Publisher:

Abstract:

Solar ultraviolet (UV) radiation at wavelengths less than 400 nm is an important source of energy for aeronomic processes throughout the solar system. Solar UV photons are absorbed in planetary atmospheres, as well as throughout the heliosphere, via photodissociation of molecules, photoionization of molecules and atoms, and photoexcitation, including resonance scattering. In this paper, the solar irradiance data measured by TIMED/SEE, solar proxies such as F10.7 and Mg II, thermospheric neutral densities from CHAMP measurements, and topside ionospheric plasma densities from DMSP are used to analyze the effects of solar irradiance on the variability of the thermosphere and the ionosphere. First, thermospheric densities near 410 km altitude are analyzed for solar irradiance variability effects during the period 2002-2004. Correlations between the densities and the solar irradiances for different spectral lines and wavelength ranges reveal significantly different characteristics. The density correlates remarkably well with all the selected solar irradiances except the lower chromospheric O I (130.4 nm) emission. Among the chosen solar proxies, the Mg II core-to-wing ratio index, EUV (30-120 nm) and F10.7 show the highest correlations with the density for short-term (< ~27 days) variations. For both long-term (> ~27 days) and short-term variations, the linear correlation coefficients exhibit a decreasing trend from low latitudes towards high latitudes. The density variability can be effectively modeled (capturing 71% of the variance) using multiple solar irradiance indices, including F10.7, SEUV (the EUV 30-120 nm index) and SFUV (the FUV 120-193 nm index), with a lag time of 1 day for both F10.7 and SEUV and 5 days for SFUV. In our regression formulation SEUV makes the largest contribution to the density variation (40%), with F10.7 the next largest (32%) and SFUV accounting for the rest (28%). Furthermore, a pronounced period of about 27.2 days (the mean period of the Sun's rotation) is present in both the density and the solar irradiance data of 2003 and 2004, and a pronounced period of about 54.4 days (twice the solar rotation period) is also revealed in 2004. However, the soft X-ray and FUV irradiances did not show a pronounced 54.4-day period in 2004, in spite of their high correlation with the densities. The Ap index also shows 54-day periodicities in 2004, and magnetic activity, together with solar irradiance, significantly affects the 54-day variation in density. In addition, NRLMSISE00, DTM-2000 and JB2006 model predictions are compared with the density measurements from CHAMP to assess their accuracy; the results show that these models underestimate the response of the thermosphere to variations induced by solar rotation. Next, the equatorial topside ionospheric plasma densities Ni are analyzed for solar irradiance variability effects during the period 2002-2005. Linear correlations between Ni and the solar irradiances for different wavelength ranges reveal significantly different characteristics. XUV (0-35 nm) and EUV (115-130 nm) show higher correlation with Ni for the long-term variations, whereas EUV (35-115 nm) shows higher correlation for the short-term variations. Moreover, partial correlation analysis shows that the long-term variations of Ni are affected by both XUV (0-35 nm) and EUV (35-115 nm), with XUV (0-35 nm) playing the more important role, whereas the short-term variations of Ni are mostly affected by EUV (35-115 nm).
Furthermore, a pronounced period of about 27 days is present in both the Ni and the solar irradiance data of 2003 and 2004, and a pronounced period of about 54 days is also revealed in 2004. Finally, prompted by previous studies that have suggested solar EUV radiation as a driver of the semiannual variation, we investigate the intra-annual variation in thermospheric neutral density near 400 km during 2002-2005. The intra-annual variation, commonly referred to as the "semiannual variation", is characterized by significant latitude structure, hemispheric asymmetries and inter-annual variability. The magnitude of the maximum yearly difference, from the yearly minimum to the yearly maximum, varies by as much as 60% from year to year, and the phases of the minima and maxima also change by 20-40 days from year to year. Each annual harmonic of the intra-annual variation, namely annual, semiannual, ter-annual and quatra-annual, exhibits a decreasing trend from 2002 through 2005 that is correlated with the decline in solar activity. In addition, some variations in these harmonics are correlated with geomagnetic activity, as represented by the daily mean value of Kp. Recent empirical models of the thermosphere are found to be deficient in capturing most of the latitude dependencies found in our data. In addition, the solar flux and geomagnetic activity proxies that we have employed do not capture some latitude and inter-annual variations detected in our data. It is possible that these variations are partly due to other effects, such as seasonal-latitudinal variations in turbopause altitude (and hence O/N2 composition) and ionosphere coupling processes, that remain to be identified as influences on the intra-annual variations depicted here. Our results provide a new dataset to challenge and validate thermosphere-ionosphere general circulation models that seek to delineate the thermospheric intra-annual variation and to understand the various competing mechanisms that may contribute to its existence and variability. We furthermore suggest that the term "intra-annual variation" be adopted to describe the variability in thermosphere and ionosphere parameters that is well captured by a superposition of annual, semiannual, ter-annual and quatra-annual harmonic terms, and that "semiannual" be used strictly in reference to a pure six-monthly sinusoidal variation. Moreover, we propose the term "intra-seasonal" for those shorter-term variations that arise as residuals from the above Fourier representation.
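The multi-index regression described above can be sketched as follows in Python. The 1- and 5-day lags and the variance-capture diagnostic are taken from the abstract, but the daily density and irradiance-index series below are placeholders; real inputs would be CHAMP densities near 410 km and the F10.7, SEUV and SFUV indices.

```python
import numpy as np

def lagged(x, lag):
    """Circularly shift a daily series so that index t holds x[t - lag]."""
    return np.roll(x, lag)

def fit_density_model(rho, f107, s_euv, s_fuv):
    """Least-squares fit of density to lagged solar indices:
       rho(t) ~ a0 + a1*F10.7(t-1) + a2*S_EUV(t-1) + a3*S_FUV(t-5)."""
    X = np.column_stack([np.ones_like(rho),
                         lagged(f107, 1),
                         lagged(s_euv, 1),
                         lagged(s_fuv, 5)])
    coef, *_ = np.linalg.lstsq(X, rho, rcond=None)
    rho_hat = X @ coef
    r2 = 1.0 - np.var(rho - rho_hat) / np.var(rho)   # fraction of variance captured
    return coef, r2

if __name__ == "__main__":
    # placeholder daily series spanning two years
    rng = np.random.default_rng(0)
    t = np.arange(730)
    drive = np.sin(2 * np.pi * t / 27.2)              # ~27-day solar rotation signal
    f107, s_euv, s_fuv = (drive + 0.1 * rng.standard_normal(730) for _ in range(3))
    rho = 0.4 * lagged(s_euv, 1) + 0.3 * lagged(f107, 1) + 0.3 * lagged(s_fuv, 5)
    coef, r2 = fit_density_model(rho, f107, s_euv, s_fuv)
    print("coefficients:", np.round(coef, 3), "| variance captured:", round(r2, 3))
```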

Relevance:

10.00%

Publisher:

Abstract:

At present, in order to image complex structures more accurately, seismic migration methods have been extended from isotropic to anisotropic media. This dissertation systematically develops a prestack time migration algorithm for complex structures and its practical application. For transversely isotropic media with a vertical symmetry axis (VTI media), the dissertation starts from the view that prestack time migration is an approximation of prestack depth migration; based on the one-way wave equation and the VTI time-migration dispersion relation, and using stationary-phase theory, it derives a wave-equation-based VTI prestack time migration algorithm. From this algorithm we can obtain analytic traveltime and amplitude expressions in VTI media, establish how the anisotropy parameter influences time migration, and, by analyzing the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, update the velocity model and estimate the anisotropy-parameter model through the migration itself. When the anisotropy parameter is zero, the algorithm naturally degenerates to isotropic time migration, so an isotropic processing procedure for imaging can also be proposed. This procedure keeps the main advantages of time migration, such as high computational efficiency and velocity estimation through migration, and additionally compensates the geometric divergence in part by adopting the deconvolution imaging condition of wave-equation migration. Application of the algorithm to complicated synthetic datasets and field data demonstrates the effectiveness of the approach. The dissertation also presents an approach for estimating the velocity and anisotropy-parameter models: after analyzing the impact of velocity and the anisotropy parameter on time migration, and based on the normal moveout of far-offset data and the lateral inhomogeneity of velocity, the velocity model is updated and the anisotropy-parameter model is estimated through migration, combining the advantages of velocity analysis in isotropic media and anisotropy-parameter estimation in VTI media. Tests on synthetic and field data demonstrate that the method is effective and very stable. Massive synthetic datasets, a 2D marine dataset and 3D field datasets are processed with VTI prestack time migration and compared with the stacked section after NMO and with the isotropic prestack time migration stack, demonstrating that the VTI prestack time migration method of this dissertation achieves better focusing and smaller positioning errors for complicated dipping reflectors. When the subsurface is more complex, primaries and multiples cannot be separated in the Radon domain because they can no longer be described by simple (parabolic) functions. We propose a multiple-attenuation method in the image domain to resolve this problem. For a given velocity model, since time migration takes wavefield propagation through complex structures into account, primaries and multiples have different offset-domain moveout discrepancies and can therefore be separated using techniques similar to the Radon-transform separation applied prior to migration. Since every individual offset-domain common-reflection-point gather incorporates complex 3D propagation effects, our method has the advantage of working with 3D data and complicated geology. Tests on synthetic and real data demonstrate the power of the method in discriminating between primaries and multiples after prestack time migration; the multiples can be attenuated considerably in the image space.
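For reference, the widely used nonhyperbolic moveout approximation of Alkhalifah and Tsvankin for VTI media, which underlies most anisotropic time-migration and velocity/eta estimation schemes, can be sketched as below. This is a standard approximation rather than the dissertation's specific dispersion-relation-based operator, and the velocities and eta value are placeholders.

```python
import numpy as np

def vti_moveout(offsets, t0, v_nmo, eta):
    """Two-way traveltime from the Alkhalifah-Tsvankin nonhyperbolic
    moveout approximation for VTI media:
        t^2 = t0^2 + x^2/Vnmo^2
              - 2*eta*x^4 / (Vnmo^2 * (t0^2*Vnmo^2 + (1 + 2*eta)*x^2))
    eta = 0 reduces to the usual hyperbolic (isotropic) moveout."""
    x2 = offsets ** 2
    t2 = (t0 ** 2 + x2 / v_nmo ** 2
          - 2.0 * eta * x2 ** 2
            / (v_nmo ** 2 * (t0 ** 2 * v_nmo ** 2 + (1.0 + 2.0 * eta) * x2)))
    return np.sqrt(t2)

if __name__ == "__main__":
    x = np.linspace(0.0, 4000.0, 9)          # offsets in metres (placeholder)
    t_iso = vti_moveout(x, t0=1.0, v_nmo=2000.0, eta=0.0)
    t_vti = vti_moveout(x, t0=1.0, v_nmo=2000.0, eta=0.1)
    # at far offsets the anisotropic event arrives earlier than the hyperbola
    print(np.round(t_iso - t_vti, 4))
```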

Relevance:

10.00%

Publisher:

Abstract:

The sandstorms of 2001 were numerically simulated with NARCM, a dust emission and transport model developed by the Meteorological Service of Canada. In this paper, the output dataset of the NARCM model is processed and analyzed. The results give a fair picture of the areas affected by, and the transport routes of, the 2001 sandstorms. The outcomes are compared with atmospheric aerosol concentrations over Beijing, China, and Tango, Japan, confirming that sandstorms occur when the AK, TK, K and Si concentrations in the air increase. It can be concluded that the NARCM model is appropriate for modeling sandstorms in northern China. The processing and analysis also show that dust is produced in and transported from the Otindag and Bashang areas, so Otindag and Bashang are among the source areas of East Asian sandstorms. Another focus of this study is the rare earth element (REE) composition of aeolian sediments from Otindag, Bashang, Tianmo, Badain Jaran, Hulunbeier and the Kalahari, South Africa. The REE analysis shows: (1) there is a clear distinction between the deserts in the degree of fractionation between HREE and LREE (HLFD), which is very high in Hulunbeier, with a (La/Lu)N value of 16.0, compared with 12.7 in Tianmo and 8.1 in Otindag; (2) the HREE fractionation degree (HFD) is about 4.0 and quite similar in all samples; (3) the LREE fractionation degree (LFD) varies slightly, from 1.5 (Badain Jaran) to 2.3 (Tianmo).

Relevance:

10.00%

Publisher:

Abstract:

A long (224 m), continuous, high-resolution core (TY2) was recovered from the Tianyang paleo-maar lake in tropical South China. Based on the diatom records of the upper 130 m of the core, this paper explores climate change and the lake's evolutionary history in tropical South China during the past 240 ka. The most typical and distinctive characteristic of the diatom assemblages is that Aulacoseira granulata was the dominant or absolutely dominant species (80-90%) through most of the 130-m core, while Cyclotella stelligera var. tenuis and Fragilaria construens var. venter were subdominant only in limited parts of the lower and upper core, respectively. The time scale has always been the biggest problem in the study of core TY2; although diatoms are seldom used to establish time scales, here we attempt this by correlating the diatom-reconstructed temperature sequence with the time scale of ODP core 806B from the equatorial western Pacific. Verified against the few most reliable ages from core TY2 and the parallel core TY1, a reasonably reliable time scale was established: the OIS 7/6 boundary falls at a depth of 100 m (ca. 194 ka BP), OIS 6/5 at 75 m (ca. 132 ka BP), OIS 5/4 at 46 m (ca. 75 ka BP), and OIS 4/3 at 35 m (ca. 60 ka BP). Qualitative and quantitative environmental reconstructions were made on the basis of the ecological types of the diatom assemblages and the EDDI dataset. Correlating the diatom-reconstructed temperature and moisture changes of core TY2 with the pollen-reconstructed temperature and rainfall sequence of core TY1 shows that the results are quite consistent for most periods; the diatom-based reconstructions are therefore quite reliable and probably have a much higher resolution than the pollen results. Combined with lithological and magnetic-susceptibility variations, the diatom analysis reveals that the general climate in tropical South China during the past 240 ka was warm and wet. On the glacial-interglacial time scale, warm-wet and cool-dry conditions were not always paired: it was relatively warm-wet during the penultimate interglacial, cool-dry during the penultimate glacial, warm-dry during the last interglacial, and cooler-drier during the last glacial. In contrast, on the subglacial-subinterglacial time scale, warm-dry and cool-wet conditions correspond very clearly, showing a very clear 21-23 ka precession cycle. The analysis also shows that the water of the Tianyang paleo-maar lake was generally warm, turbulent, turbid, mesotrophic, slightly alkaline, fresh and of low conductivity during the past 240 ka, with small variations in some intervals. The Tianyang paleolake evolved from a shallow to semi-deep lake in OIS 7d, to an open shallow lake in OIS 7c-OIS 5b, a shallow coastal lake in OIS 5a-OIS 4c and a swamp in OIS 4b, and then dried up completely in OIS 3c. The lake evolution was mainly controlled by temperature and precipitation changes in tropical China, which in turn were probably controlled by the migration of the monsoon rain band and by the evaporation rate, both governed by the evolution of the East Asian monsoon. Accordingly, when the summer monsoon was strongest the climate was warm-dry, and when it was merely stronger the climate was warm-wet; when the winter monsoon was strongest the climate was cool-dry, and when it was merely stronger the climate was cool-wet. This mechanism produced the warm-dry subinterglacial and cool-wet subglacial climates of tropical South China.

Relevance:

10.00%

Publisher:

Abstract:

Reflectivity-sequence extraction is a key part of impedance inversion in seismic exploration. Although many valid inversion methods exist, with crosswell seismic data the frequency band of the data still cannot be broadened enough to satisfy practical needs, and this remains an urgent problem. Prestack depth migration, which has matured in recent years, is becoming increasingly robust in exploration; it is a powerful imaging technology for geological targets with complex structure, and its final result is a reflectivity image. Based on the reflectivity imaging of crosswell seismic data and the wave equation, this thesis completes the following work. (1) A blind-deconvolution workflow is completed, in which the Cauchy criterion is used to regularize the inversion (sparse inversion). A preconditioned conjugate-gradient (PCG) solver based on Krylov subspaces is incorporated to reduce the computation and improve the speed, and the system matrix is no longer required to be symmetric positive definite. This method is applied to the recovery of high frequencies in crosswell seismic sections, with satisfactory results. (2) A rotation transform and the Viterbi algorithm are applied in the preprocessing for wave-equation prestack depth migration. Wave-equation prestack depth migration requires the seismic dataset to lie on a regular grid, but complex terrain and folds can make the acquisition geometry irregular; at the same time, traces must be interpolated to avoid the aliasing produced by sparse sampling along the inline direction. In this thesis, the rotation transform is used to make the inlines parallel to the coordinate axes, and the Viterbi algorithm is used to pick events automatically; the results are satisfactory. (3) Besides extrapolation, imaging is a key part of prestack depth migration: however accurate the extrapolation operator is, the imaging condition strongly influences the final reflectivity image. The author migrates the Marmousi dataset under different imaging conditions and analyzes the methods according to the results. The computations show that the imaging condition that stabilizes the source wavefield and the least-squares-estimation imaging condition proposed here are better than the conventional correlation imaging condition. (4) The traditional pattern of "distributed computing and mass decision" is widely adopted in the field of seismic data processing and is becoming an obstacle to improving enterprise-level management. At the end of the thesis, a systematic solution based on the mode of "distributed computing - centralized storage - instant release" is therefore put forward, built on a combination of the C/S and B/S release models. The architecture of the solution, the corresponding web technology and the client software are introduced, and the application demonstrates the validity of this scheme.
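A minimal sketch of Viterbi-style automatic event picking is shown below: dynamic programming finds, across traces, the time path that maximizes summed amplitude subject to a limit on the time jump between neighbouring traces. It illustrates the general technique only; the cost function and constraints used in the thesis are not specified in the abstract, and the panel is synthetic.

```python
import numpy as np

def viterbi_pick(panel, max_jump=2):
    """Pick one event across a (n_samples, n_traces) amplitude panel.
    Dynamic programming maximises the summed amplitude along a path whose
    time index changes by at most `max_jump` samples between adjacent traces."""
    nt, nx = panel.shape
    score = np.full((nt, nx), -np.inf)
    back = np.zeros((nt, nx), dtype=int)
    score[:, 0] = panel[:, 0]
    for j in range(1, nx):
        for i in range(nt):
            lo, hi = max(0, i - max_jump), min(nt, i + max_jump + 1)
            k = lo + int(np.argmax(score[lo:hi, j - 1]))
            score[i, j] = panel[i, j] + score[k, j - 1]
            back[i, j] = k
    # backtrack from the best final sample
    path = np.empty(nx, dtype=int)
    path[-1] = int(np.argmax(score[:, -1]))
    for j in range(nx - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path

if __name__ == "__main__":
    # synthetic panel: a gently dipping event plus noise
    rng = np.random.default_rng(0)
    nt, nx = 100, 40
    panel = 0.1 * rng.standard_normal((nt, nx))
    true_t = (30 + 0.5 * np.arange(nx)).astype(int)
    panel[true_t, np.arange(nx)] += 1.0
    picked = viterbi_pick(panel)
    print("max picking error (samples):", int(np.abs(picked - true_t).max()))
```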

Relevance:

10.00%

Publisher:

Abstract:

This dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal-representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present the fundamentals and algorithms of ICA and its first applications to the separation of natural-earthquake signals and of exploration (survey) seismic signals. For signal representation on deterministic bases, the dissertation proposes least-squares-inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate-gradient methods, and their applications to seismic deconvolution, the Radon transform, etc. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two kinds of signal representation, they can be integrated into one framework, because both deal with the restoration of signals or information: the former reconstructs original signals from mixed signals, while the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre-processing and post-processing methods for seismic prestack depth migration. ICA can separate original signals from their mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms and applications of ICA and, comparing it with the KL transform, propose the concept of an independent-components transform (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show that ICA is usable for seismic signal processing, and initial results are achieved. ICA is also applied to separating converted earthquake waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. These results show the prospects of applying ICA to geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA and the wavelet transform is discussed, and it is shown that the reconstruction of wavelet prototype functions is a Lie-group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution, which is validated by numerical examples. The key to prestack depth migration is the regularization of prestack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods in order to argue the critical effect of regularization; reviewing existing seismic interpolation algorithms, I argue that de-aliased reconstruction of unevenly sampled data is still a challenge. The fundamentals of seismic data reconstruction are discussed first, and then a sparseness constraint on least-squares inversion and a preconditioned conjugate-gradient solver are studied and implemented.
Choosing a constraint term with a Cauchy distribution, I program a PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. Regarding seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well on their own, but existing methods cannot combine the two. In this dissertation, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem in which an adaptive, DFT-weighted norm regularization term is used. The inverse problem is solved by the preconditioned conjugate-gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation across regular gaps, filling of irregular gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient and applicable. The research is valuable for seismic data regularization and crosswell seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data regular and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate the shot gathers instead of simply inserting zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the effectiveness and practicability of the approach.
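The core idea of band-limited reconstruction of irregularly sampled traces can be sketched as follows: fit a small set of low-wavenumber Fourier coefficients to the irregular samples by damped least squares and evaluate them on a regular grid. The adaptive DFT-weighted norm, the PCG solver and the anti-alias weighting from low frequencies described above are not reproduced; this is only the basic minimum-norm Fourier inversion, with illustrative sizes and damping.

```python
import numpy as np

def fourier_reconstruct(x_irr, d_irr, x_reg, n_coef=15, damping=1e-3):
    """Reconstruct a band-limited signal on the regular grid x_reg from
    irregular samples (x_irr, d_irr) by damped least-squares fitting of
    Fourier coefficients for wavenumbers -n_coef..n_coef."""
    period = x_reg.max() - x_reg.min()
    k = np.arange(-n_coef, n_coef + 1)
    def basis(x):
        return np.exp(2j * np.pi * np.outer(x, k) / period)
    A = basis(x_irr)
    # damped normal equations: (A^H A + damping I) c = A^H d
    rhs = A.conj().T @ d_irr
    mat = A.conj().T @ A + damping * np.eye(len(k))
    coef = np.linalg.solve(mat, rhs)
    return (basis(x_reg) @ coef).real

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_reg = np.linspace(0.0, 1.0, 200)
    signal = lambda x: np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 7 * x)
    x_irr = np.sort(rng.uniform(0.0, 1.0, 60))        # irregular "trace positions"
    d_irr = signal(x_irr)
    d_rec = fourier_reconstruct(x_irr, d_irr, x_reg)
    print("max reconstruction error:", float(np.abs(d_rec - signal(x_reg)).max()))
```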

Relevance:

10.00%

Publisher:

Abstract:

We present an image-based approach to infer 3D structure parameters using a probabilistic "shape+structure" model. The 3D shape of a class of objects may be represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes can then be estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We augment the shape model to incorporate structural features of interest; novel examples with missing structure parameters may then be reconstructed to obtain estimates of these parameters. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a dataset of thousands of pedestrian images generated from a synthetic model, we can perform accurate inference of the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
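The reconstruction step can be illustrated with ordinary PCA standing in for the paper's mixture-of-probabilistic-PCA prior: fit a low-dimensional linear model on training vectors that concatenate silhouette-contour features with structure parameters, then, for a new example whose structure parameters are missing, solve for the latent coefficients from the observed part alone and read the missing parameters off the reconstruction. The feature dimensions, joint parameterization and training data below are placeholders, and the Bayesian mixture machinery of the actual model is omitted.

```python
import numpy as np

class ShapeStructureModel:
    """PCA on concatenated [contour features | structure parameters]."""
    def __init__(self, n_components=5):
        self.n_components = n_components

    def fit(self, Y):
        self.mean_ = Y.mean(axis=0)
        _, _, vt = np.linalg.svd(Y - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]          # shape (k, D)
        return self

    def infer_missing(self, y_obs, obs_idx, mis_idx):
        """Least-squares latent fit on observed dims, reconstruct missing dims."""
        C_obs = self.components_[:, obs_idx]                 # shape (k, |obs|)
        z, *_ = np.linalg.lstsq(C_obs.T, y_obs - self.mean_[obs_idx], rcond=None)
        return self.mean_[mis_idx] + self.components_[:, mis_idx].T @ z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d_contour, d_struct = 500, 60, 19 * 3     # e.g. 19 3-D joint coordinates
    # placeholder training data driven by a shared low-dimensional latent cause
    Z = rng.standard_normal((n, 5))
    A = rng.standard_normal((5, d_contour + d_struct))
    Y = Z @ A + 0.01 * rng.standard_normal((n, d_contour + d_struct))
    model = ShapeStructureModel(n_components=5).fit(Y)
    obs = np.arange(d_contour)                        # observed: contour features
    mis = np.arange(d_contour, d_contour + d_struct)  # missing: joint locations
    y_new = Z[:1] @ A                                 # a "new" example
    est = model.infer_missing(y_new[0, obs], obs, mis)
    print("max joint-coordinate error:", float(np.abs(est - y_new[0, mis]).max()))
```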

Relevance:

10.00%

Publisher:

Abstract:

P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, which poses an obstacle to the chemotherapeutic treatment of cancer. To aid the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set independent of the training set, giving a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Two other mathematical tools, a back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results on the test set than the BPNN, but the difference was not significant according to the F-statistic at p = 0.05, and PLS failed to produce a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential for predicting flavonoid P-gp inhibitors and might be applied in further drug design.
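As a rough illustration of this type of QSAR workflow (not the paper's BRNN implementation), the sketch below trains a small scikit-learn neural-network regressor in which the L2 penalty `alpha` serves as a crude stand-in for Bayesian regularization, and reports the standard error of prediction on an independent test set. The descriptors and activity values are entirely hypothetical placeholders; a real study would use computed molecular descriptors and measured inhibitory activities scaled to [0, 1].

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# placeholder dataset: 57 "flavonoids" x 6 descriptors plus an activity value
X = rng.random((57, 6))
y = X @ np.array([0.4, -0.3, 0.2, 0.1, -0.2, 0.3]) + 0.05 * rng.standard_normal(57)
y = MinMaxScaler().fit_transform(y[:, None]).ravel()      # scale activities to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# alpha is the L2 penalty: a simple surrogate for Bayesian regularisation,
# which a true BRNN would tune by evidence maximisation
model = MLPRegressor(hidden_layer_sizes=(4,), alpha=1e-2, max_iter=5000,
                     random_state=0).fit(X_train, y_train)

residuals = y_test - model.predict(X_test)
sep = np.sqrt(np.mean(residuals ** 2))      # standard error of prediction
print(f"standard error of prediction on the test set: {sep:.3f}")
```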

Relevance:

10.00%

Publisher:

Abstract:

Malicious software (malware) has increased significantly in both number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect: thousands of novel variants are released every day, and modern obfuscation techniques ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing countermeasures, which has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves, which means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (which anticipates the attackers' moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware; the idea is to show that a proactive approach can be applied to both the x86 and the mobile world. The contributions provided by these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors, and then propose possible solutions to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can easily be employed without particular knowledge of the targeted systems, and then examine a possible strategy to build a machine-learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems; in particular, I propose a methodology to build a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine-learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine-learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results obtained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks, suggesting that a proactive approach is crucial to building systems that provide concrete security against both general and evasion attacks.
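A minimal sketch of an evasion attack against a linear malware detector is shown below (a generic illustration, not AdversariaLib or the attacks developed in the thesis): features are binary presence indicators, the attacker may only add features (as when injecting content into a PDF), and features are added in the order that most decreases the classifier's malicious score until the sample is misclassified or a budget is exhausted. The data and classifier are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def evade_linear(clf, x, max_added=10):
    """Greedy feature-addition evasion of a linear detector.
    Only 0 -> 1 changes are allowed (content can be added, not removed)."""
    x = x.copy()
    w = clf.coef_.ravel()
    # candidate features sorted by how strongly they push towards 'benign'
    order = np.argsort(w)
    added = 0
    for j in order:
        if w[j] >= 0 or added >= max_added:
            break
        if x[j] == 0:
            x[j] = 1
            added += 1
            if clf.predict(x[None, :])[0] == 0:   # 0 = classified as benign
                break
    return x, added

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 400, 30
    X = rng.integers(0, 2, size=(n, d))
    w_true = rng.normal(0, 1, d)
    y = (X @ w_true > 0).astype(int)               # 1 = "malicious"
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    x0 = X[y == 1][0]                              # a detected "malicious" sample
    x_adv, n_added = evade_linear(clf, x0)
    print("features added:", n_added,
          "| still detected:", bool(clf.predict(x_adv[None, :])[0]))
```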

Relevance:

10.00%

Publisher:

Abstract:

The identification of subject-specific traits extracted from patterns of brain activity still represents an important challenge. The need to detect distinctive brain features, which is relevant for biometric and brain-computer interface systems, has also been emphasized in monitoring the effect of clinical treatments and in evaluating the progression of brain disorders. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the length of the selected epochs, impeding a meaningful comparison between results from different studies. In this study we propose an approach that investigates the existence of a distinctive functional core (sub-network) using an unbiased reconstruction of network topology. Brain signals from a public and freely available EEG dataset were analyzed using a phase-synchronization-based measure, the minimum spanning tree and k-core decomposition. The analysis was performed for each classical brain rhythm separately. Furthermore, we aim to provide a network approach insensitive to the effects that epoch length has on functional connectivity (FC) and network reconstruction. Two different measures, the phase lag index (PLI) and the amplitude envelope correlation (AEC), were applied to EEG resting-state recordings from a group of eighteen healthy volunteers. The weighted clustering coefficient (CCw), the weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Concerning the distinctive functional core, the highest classification rates were obtained from k-core decomposition in the gamma (EER=0.130, AUC=0.943) and high beta (EER=0.172, AUC=0.905) frequency bands. Concerning the influence of epoch length, the scalp analysis shows a decrease in both mean PLI and AEC values with increasing epoch length, with a tendency to stabilize at a length of 12 seconds for PLI and 6 seconds for AEC. Moreover, CCw and Lw show very similar behaviour, with the metrics based on AEC being more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly MSTs based on PLI (1-6 seconds versus 4-8 seconds for AEC). At the source level the results were even more reliable, with stability reached already at a 1-second duration for PLI-based MSTs. Our results confirm that EEG analysis may represent an effective tool to identify subject-specific characteristics that may be of great impact for several bioengineering applications. Regarding epoch length, the present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp level. Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain network topology between different studies.
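The PLI and MST construction mentioned above can be sketched as follows: PLI_ij is the absolute mean over time of the sign of the phase difference between channels i and j (computed from the analytic signal), and the MST is taken over a distance graph derived from the PLI matrix. The EEG array, channel count and epoch length below are synthetic placeholders, and the k-core decomposition and AEC metric are not included.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.sparse.csgraph import minimum_spanning_tree

def phase_lag_index(data):
    """PLI matrix for `data` of shape (n_channels, n_samples):
    PLI_ij = | mean over time of sign(sin(phi_i - phi_j)) |."""
    phases = np.angle(hilbert(data, axis=1))
    diff = phases[:, None, :] - phases[None, :, :]
    return np.abs(np.mean(np.sign(np.sin(diff)), axis=2))

def pli_mst(data):
    """Minimum spanning tree of the PLI graph (strong links = short edges)."""
    pli = phase_lag_index(data)
    dist = 1.0 / (pli + 1e-12)          # invert so high PLI gives low edge weight
    np.fill_diagonal(dist, 0.0)
    return pli, minimum_spanning_tree(dist).toarray()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ch, n_samp = 19, 2048             # placeholder: 19 channels, one epoch
    common = rng.standard_normal(n_samp)
    # lagged copies of a common signal plus noise give non-zero phase lags
    data = np.array([0.6 * np.roll(common, 3 * i) for i in range(n_ch)])
    data = data + rng.standard_normal((n_ch, n_samp))
    pli, mst = pli_mst(data)
    degrees = (mst + mst.T > 0).sum(axis=0)
    print("mean PLI: %.3f | max MST degree (hub size): %d"
          % (pli[np.triu_indices(n_ch, 1)].mean(), degrees.max()))
```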

Relevance:

10.00%

Publisher:

Abstract:

Information Systems for complex situations often fail to adequately deliver quality and suitability. One reason for this failure is an inability to identify comprehensive user requirements: seldom do all stakeholders, especially those "invisible" or "back room" system users, have a voice when systems are designed. If this is a global problem, it may affect both the public and private sectors in terms of their ability to perform, produce and stay competitive. To improve upon this, system designers use rich pictures as a diagrammatic means of identifying differing world views, with the aim of creating a shared understanding of the organisation. Rich pictures have predominantly been used as freeform, unstructured tools with no commonly agreed syntax. This research has collated, analysed and documented a substantial collection of rich pictures into a single dataset. Attention has been focussed on three main research areas: how the rich picture is facilitated, how the rich picture is constructed, and how to interpret the resultant pictures. This research highlights the importance of the rich picture tool and argues for the value of adding levels of structure in certain cases. It is shown that providing a pre-drawing session, a common key of symbols and a framework for icon understanding brings considerable benefits for both the interpreter and the creator. In conclusion, it is suggested that there is some evidence that a framework which aims to support the rich picture process and aid interpretation is valuable.