35 results for: high resolution Trentino Alto Adige data set climatology daily temperature complex orography
Abstract:
While the combination of Histograms of Oriented Gradients and a Support Vector Machine (HOG+SVM) is the most successful human detection algorithm, it is time-consuming. This paper proposes two ways to address this problem. One is to reuse the features of blocks shared by intersecting detection windows when constructing their HOG features. The other is to use sub-cell based interpolation to compute the HOG features of each block efficiently. Combining the two yields a more than fivefold speed-up in human detection. To evaluate the proposed method, we established a top-view human database. Experimental results on the top-view database and the well-known INRIA data set demonstrate the effectiveness and efficiency of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
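The block-reuse idea can be sketched in a few lines: compute every block histogram once for the whole image, then assemble each overlapping window's descriptor by slicing the cached block grid. This is a minimal illustration with assumed geometry (the grid sizes and the random stand-in histograms are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 9 orientation bins, a 15x31 block grid for the image,
# and 15x7 blocks per detection window, stepped one block at a time.
n_bins = 9
blocks_y, blocks_x = 15, 31
win_by, win_bx = 15, 7

# Precompute every block histogram once (random stand-ins here).
block_hog = rng.random((blocks_y, blocks_x, n_bins))

def window_descriptor(by, bx):
    """Assemble a window's HOG descriptor by reusing cached block features."""
    return block_hog[by:by + win_by, bx:bx + win_bx].reshape(-1)

# Overlapping windows share blocks, so no histogram is recomputed.
d0 = window_descriptor(0, 0)
d1 = window_descriptor(0, 1)  # shifted one block to the right
```

Because `d1` is just a shifted view of the same cache, all but one column of its blocks were already computed for `d0`, which is where the speed-up comes from.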
Abstract:
The Gaussian process latent variable model (GP-LVM) is an effective probabilistic approach to dimensionality reduction because it can recover a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence that the supervised GP-LVM uses the class label information effectively and thus consistently outperforms both the GP-LVM and its discriminative extension. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
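As a rough sketch of the model behind this approach: the GP-LVM marginal log-likelihood treats each data dimension as an independent GP over the latent positions X, and the latent positions are optimised (here, MAP-estimated in the supervised variant by adding a label-informed prior). The kernel choice, sizes, and random data below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 5))   # N=20 observations, D=5 observed dimensions
X = rng.standard_normal((20, 2))   # latent positions (Q=2), to be optimised

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    """RBF kernel over latent positions, with a small noise term for stability."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2) + noise * np.eye(len(X))

def gplvm_log_likelihood(X, Y):
    """GP-LVM marginal log-likelihood: D independent GPs sharing one kernel on X."""
    N, D = Y.shape
    K = rbf_kernel(X)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    return -0.5 * (D * N * np.log(2 * np.pi) + D * logdet + np.trace(Y.T @ Kinv_Y))

ll = gplvm_log_likelihood(X, Y)
```

The supervised extension described in the abstract would add a log-prior over X that pulls same-class points together before maximising; that term is omitted here.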
Abstract:
Variable selection is a key step in the chemical analysis of multi-component samples and in quantitative structure-activity/property relationship (QSAR/QSPR) studies. In this study, several methods were compared: three classical methods (forward selection, backward elimination, and stepwise regression), orthogonal descriptors, leaps-and-bounds regression, and a genetic algorithm. Thirty-five nitrobenzenes were taken as the data set. From these structures, quantum chemical parameters, topological indices, and an indicator variable were extracted as descriptors for the comparison of variable selection methods. Interesting results were obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
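One of the classical methods compared, forward selection, can be sketched as a greedy loop that repeatedly adds the descriptor that most reduces the residual sum of squares. The descriptor matrix below is a random stand-in, not the nitrobenzene set:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((35, 6))   # 35 compounds, 6 candidate descriptors
# Synthetic response depending only on descriptors 0 and 3.
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(35)

def rss(X_sub, y):
    """Residual sum of squares of the least-squares fit on a descriptor subset."""
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    r = y - X_sub @ beta
    return r @ r

def forward_selection(X, y, k):
    """Greedy forward selection: add the descriptor that most reduces RSS."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        best = min(rest, key=lambda j: rss(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

selected = forward_selection(X, y, 2)
```

Backward elimination and stepwise regression differ only in whether descriptors are removed, added, or both at each step.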
Abstract:
In this paper, orthogonal descriptors and leaps-and-bounds regression analysis are compared. For the data set of nitrobenzenes used in this study, the results obtained with orthogonal descriptors are better than those obtained with leaps-and-bounds regression. Leaps-and-bounds regression can be used effectively for variable selection in quantitative structure-activity/property relationship (QSAR/QSPR) studies, and orthogonalisation of descriptors is likewise a good variable selection method for QSAR/QSPR studies.
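Orthogonalisation of descriptors can be sketched as a sequential Gram-Schmidt pass over the descriptor columns, so each descriptor keeps only the information not already carried by its predecessors. The data below are random stand-ins, not the nitrobenzene descriptors:

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal((35, 4))   # descriptor matrix: 35 compounds, 4 descriptors

def orthogonalise(D):
    """Sequential orthogonalisation (Gram-Schmidt): each column is replaced by
    its residual after projecting out the previously orthogonalised columns."""
    O = D.astype(float).copy()
    for j in range(1, O.shape[1]):
        for i in range(j):
            O[:, j] -= (O[:, j] @ O[:, i]) / (O[:, i] @ O[:, i]) * O[:, i]
    return O

O = orthogonalise(D)
G = O.T @ O   # Gram matrix: off-diagonals should vanish
```

After this transform, regression coefficients on the orthogonal descriptors are stable under adding or dropping later descriptors, which is what makes the ordering-based selection work.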
Abstract:
The phylogenetic relationships and species identification of pufferfishes of the genus Takifugu were examined by randomly amplified polymorphic DNA (RAPD) analysis and by sequencing of amplified partial mitochondrial 16S ribosomal RNA genes. Amplifications with 200 ten-base primers under predetermined optimal reaction conditions yielded 1962 reproducible amplified fragments ranging from 200 to 3000 bp. Genetic distances between five species of Takifugu, with Lagocephalus spadiceus as the outgroup, were calculated from the presence or absence of the amplified fragments. Approximately 572 bp of the 16S ribosomal RNA gene was amplified using universal primers and used to determine genetic distance values. Phylogenetic trees for the five species of Takifugu and the outgroup were generated by neighbor-joining analysis based on the RAPD data set and the mitochondrial 16S rDNA sequences. The genetic distance between Takifugu rubripes and Takifugu pseudommus was almost the same as that between individuals within each species, and much smaller than the distances between T. rubripes, T. pseudommus, and the other species. The molecular data from both the mitochondrial and the nuclear DNA analyses strongly indicate that T. rubripes and T. pseudommus should be regarded as the same species. A fragment of approximately 900 bp was amplified from the genomes of all 26 T. pseudommus individuals examined and of 4 individuals intermediate between T. rubripes and T. pseudommus. Of the 32 T. rubripes individuals, only 3 carried this fragment. These results suggest that the fragment may be useful in distinguishing between T. rubripes and T. pseudommus.
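A genetic distance computed from fragment presence/absence, of the kind commonly used for RAPD data, can be illustrated with the Nei and Li (1979) formula. The banding matrix below is a made-up toy, not the study's 1962-fragment data:

```python
import numpy as np

# Toy RAPD presence/absence matrix: rows = individuals, cols = fragments.
bands = np.array([
    [1, 1, 0, 1, 0, 1],   # e.g. T. rubripes
    [1, 1, 0, 1, 0, 1],   # e.g. T. pseudommus (identical banding in this toy)
    [0, 1, 1, 0, 1, 0],   # e.g. outgroup
])

def nei_li_distance(a, b):
    """Nei & Li (1979) distance from shared fragments:
    D = 1 - 2 * n_shared / (n_a + n_b)."""
    shared = np.sum((a == 1) & (b == 1))
    return 1.0 - 2.0 * shared / (a.sum() + b.sum())

d_rub_pse = nei_li_distance(bands[0], bands[1])   # identical banding -> 0
d_rub_out = nei_li_distance(bands[0], bands[2])   # little overlap -> large
```

A matrix of such pairwise distances is exactly the input that neighbor-joining takes to build the trees described above.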
Abstract:
An assimilation data set based on the GFDL MOM3 model and the NODC XBT data set is used to examine the circulation in the western tropical Pacific and its seasonal variations. The assimilated and observed velocities and transports of the mean circulation agree well. Transports of the North Equatorial Current (NEC), the Mindanao Current (MC), the North Equatorial Countercurrent (NECC) west of 140°E, and the Kuroshio origin estimated from the assimilation data display seasonal cycles that are roughly strong in boreal spring and weak in autumn, with small phase differences. The NECC transport also has a semi-annual fluctuation resulting from the phase lag between the seasonal cycles of the two tropical gyres' recirculations. The seasonal cycle of the Indonesian Throughflow (ITF), which is strong in summer during the southeast monsoon, differs somewhat from those of its upstream currents, the MC and the New Guinea Coastal Current (NGCC), implying the monsoon's impact on it.
Abstract:
In this study we describe the velocity structure and transport of the North Equatorial Current (NEC), the Kuroshio, and the Mindanao Current (MC) using repeated hydrographic sections near the Philippine coast. The most striking feature of the current system in the region is the undercurrent structure below the surface flow: both the Luzon Undercurrent and the Mindanao Undercurrent appear to be permanent phenomena. The present data set also provides an estimate of the mean circulation (relative to 1500 dbar) involving a NEC transport of 41 Sverdrups (Sv), a Kuroshio transport of 14 Sv, and a MC transport of 27 Sv, yielding a mass balance closed to better than 1 Sv within the region enclosed by the stations. The circulation diagram is insensitive to vertical displacements of the reference level within the depth range of 1500 to 2500 dbar. Transport fluctuations are in general consistent with earlier observations; that is, the NEC and the Kuroshio vary in phase, with a seasonal signal superimposed on interannual variations, while the transport of the MC is dominated by a quasi-biennial oscillation. Dynamic height distributions are also examined to explore the dynamics of the current system.
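The quoted transports can be checked for closure directly, since the NEC bifurcates into the Kuroshio and the MC at the Philippine coast. This is simple arithmetic on the abstract's numbers, not the authors' computation:

```python
# 1 Sverdrup = 10^6 m^3/s of volume transport.
SV = 1.0e6

nec, kuroshio, mc = 41.0, 14.0, 27.0   # transports in Sv, from the abstract

# Closure of the mean circulation: inflow minus the two outflows.
imbalance = nec - (kuroshio + mc)

# The abstract quotes a mass balance closed to better than 1 Sv.
assert abs(imbalance) <= 1.0

nec_m3s = nec * SV   # the same transport in m^3/s
```

With these rounded values the budget closes exactly; the sub-1 Sv residual quoted in the abstract reflects the unrounded estimates.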
Abstract:
A major problem envisaged in the course of man-made climate change is sea-level rise. The global aspect of the thermal expansion of sea water is probably simulated reasonably well by present-day climate models; the variation of sea level due to variations in regional atmospheric forcing and in the large-scale oceanic circulation, however, is not adequately simulated by a global climate model because of insufficient spatial resolution. One way to infer the coastal aspects of sea-level change is a statistical "downscaling" strategy: a linear statistical model is built on a multi-year data set of local sea-level data and of large-scale oceanic and/or atmospheric data such as sea-surface temperature or sea-level air pressure. We apply this idea to sea level along the Japanese coast, relating it to regional and North Pacific sea-surface temperature and sea-level air pressure. Two relevant processes are identified: one is the local wind set-up of water due to regional low-frequency wind anomalies; the other is a planetary-scale atmosphere-ocean interaction that takes place in the eastern North Pacific.
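The core of such a linear downscaling model is a least-squares fit of the local record onto large-scale predictors. This is a minimal sketch with synthetic stand-ins (the predictors would in practice be leading patterns of SST and sea-level pressure; all data below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n_months = 240

# Stand-ins for large-scale predictors, e.g. leading principal components
# of North Pacific SST and sea-level pressure anomalies.
G = rng.standard_normal((n_months, 3))

# Synthetic local sea level: a linear response to the large-scale state plus noise.
true_coefs = np.array([1.5, -0.8, 0.3])
local_sl = G @ true_coefs + 0.2 * rng.standard_normal(n_months)

# The downscaling model itself: a linear map fit by least squares on the
# multi-year training record, then usable on climate-model output.
coefs, *_ = np.linalg.lstsq(G, local_sl, rcond=None)
predicted = G @ coefs
```

Once fitted on observations, the same linear map can be driven by a climate model's large-scale fields, which is the downscaling step described in the abstract.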
Abstract:
The inventories of nutrients in the surface water and of large phytoplankton (> 69 μm) of a typical coastal water body, the Jiaozhou Bay, China, were analyzed from the JERS ecological database, covering N and P from the 1960s and Si from the 1980s. By examining long-term changes in nutrient concentrations, calculating stoichiometric balances, and comparing diatom composition, Si limitation of diatom production was found to be the most likely scenario, its probability rising from 37% in the 1980s to 50% in the 1990s. The Jiaozhou Bay ecosystem is becoming seriously eutrophic: NO2-N, NO3-N, and NH4-N increased notably from 0.1417, 0.5414, and 1.7222 μmol/L in the 1960s to 0.9551, 3.001, and 8.0359 μmol/L in the late 1990s, respectively, while Si decreased markedly from 4.2614 μmol/L in the 1980s to 1.5861 μmol/L in the late 1990s. The nutrient structure is controlled by nitrogen; the main limiting nutrient is probably silicon; and because of the Si limitation the phytoplankton community structure has changed drastically.
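The stoichiometric reasoning can be illustrated with the abstract's late-1990s concentrations. The 1:1 Si:N requirement assumed for diatoms below is the usual Redfield-Brzezinski ratio (Si:N:P of about 16:16:1), an assumption for illustration rather than a value from the paper:

```python
# Late-1990s surface concentrations from the abstract (umol/L).
no2, no3, nh4 = 0.9551, 3.001, 8.0359
si = 1.5861

# Dissolved inorganic nitrogen is the sum of the three N species.
din = no2 + no3 + nh4

# Diatoms need Si:N near 1:1, so a ratio well below 1 points to Si limitation.
si_to_n = si / din
si_limited = si_to_n < 1.0
```

With DIN near 12 μmol/L against only 1.6 μmol/L of Si, the Si:N ratio is about 0.13, which is the quantitative basis for calling silicon the probable limiting nutrient.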
Abstract:
The seasonal evolution of dissolved inorganic carbon (DIC) and of CO2 air-sea fluxes in the Jiaozhou Bay was investigated using a data set from four cruises covering a seasonal cycle during 2003 and 2004. The results revealed that DIC had no obvious seasonal variation, with an average concentration of 2035 μmol kg^-1 C in surface water. The sea-surface partial pressure of CO2 (pCO2), however, changed with the seasons: it was 695 μatm in July and 317 μatm in February. Using the gas exchange coefficient calculated with Wanninkhof's model, it was concluded that the Jiaozhou Bay was a source of atmospheric CO2 in spring, summer, and autumn, and a sink in winter. The bay released 2.60 × 10^11 mmol C to the atmosphere in spring, 6.18 × 10^11 mmol C in summer, and 3.01 × 10^11 mmol C in autumn, whereas it absorbed 5.32 × 10^10 mmol C from the atmosphere in winter; in total, 1.13 × 10^11 mmol C was released to the atmosphere over one year. The source/sink behaviour varied markedly across the different regions of the Jiaozhou Bay. In February, the inner bay was a carbon sink, while the bay mouth and the outer bay were carbon sources. In June and July, the inner and outer bay were both carbon sources, with the strength increasing from the inner to the outer bay. In November, the inner bay was a carbon source, the bay mouth was a carbon sink, and the outer bay was a weak CO2 source. These changes are controlled by many factors, the most important being temperature and phytoplankton. Water temperature in particular is the main factor controlling the carbon dioxide system and the source/sink behaviour of the Jiaozhou Bay: the bay is a carbon dioxide source when the water temperature is above 6.6°C and a carbon sink otherwise.
Phytoplankton is another controlling factor and may play an important role in the source/sink behaviour of regions where the source or sink nature is weak.
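The gas-exchange step behind these source/sink estimates follows Wanninkhof's (1992) parameterisation, where the flux is the product of a wind-dependent transfer velocity, the CO2 solubility, and the air-sea pCO2 difference. The wind speed, Schmidt number, solubility, and atmospheric pCO2 below are illustrative assumptions, not cruise values; only the July sea-surface pCO2 comes from the abstract:

```python
# Wanninkhof (1992) gas-transfer sketch: F = k * K0 * (pCO2_sea - pCO2_air).
u10 = 5.0               # wind speed at 10 m (m/s), assumed
sc = 660.0              # Schmidt number for CO2, assumed (~20 degC seawater)
k0 = 0.035              # CO2 solubility (mol L^-1 atm^-1), assumed
pco2_sea = 695.0e-6     # sea-surface pCO2 (atm), July value from the abstract
pco2_air = 380.0e-6     # atmospheric pCO2 (atm), assumed

# Transfer velocity (cm/hr) for short-term winds: k = 0.31 * u^2 * (Sc/660)^-0.5.
k = 0.31 * u10**2 * (sc / 660.0) ** -0.5

# Positive delta means the sea outgasses CO2 (a source); negative means uptake.
delta_pco2 = pco2_sea - pco2_air
is_source = delta_pco2 > 0
```

With the July pCO2 well above the atmospheric value, the sign comes out positive, consistent with the bay acting as a summer CO2 source.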
Abstract:
Offshore seismic exploration involves high investment and high risk, and its data suffer from problems such as multiples, so high-resolution, high-S/N processing of marine seismic data is becoming an important subject. Based on an analysis of marine seismic exploration, a survey of the literature, and an integration of current mainstream and emerging technology, this paper proposes multi-scale decomposition of both prestack and poststack seismic data based on the wavelet and Hilbert-Huang transforms, together with a theory of phase deconvolution, and studies the related algorithms. The pyramid algorithm of decomposition and reconstruction, given by Mallat's algorithm for the discrete wavelet transform, is introduced into seismic data processing, and its validity is shown by tests with field data. The main idea of the Hilbert-Huang transform is empirical mode decomposition, with which any complicated data set can be decomposed into a finite and often small number of intrinsic mode functions that admit a well-behaved Hilbert transform. After the decomposition, an analytic signal is constructed by the Hilbert transform, from which the instantaneous frequency and amplitude, and hence the Hilbert spectrum, can be obtained. This decomposition method is adaptive and highly efficient, and since it is based on the local characteristic time scales of the data, it is applicable to nonlinear and non-stationary processes. The phenomena of fitting overshoot, undershoot, and end swings in the Hilbert-Huang transform are analyzed, and effective methods for eliminating them are studied in the paper. Multi-scale decomposition of both prestack and poststack seismic data makes amplitude-preserving processing possible, greatly enhances seismic data resolution, and overcomes the inability of conventional methods to restore the amplitudes of different frequency components uniformly.
The method of phase deconvolution overcomes the minimum-phase limitation of traditional deconvolution and better matches the fact that seismic wavelets in practical applications are mixed-phase, so it gives more reliable results. In the applied research, high-resolution, relative-amplitude-preserving processing results were obtained through careful analysis and application of the above methods to seismic data from four different target areas of the China Sea. Finally, a set of processing flows and methods was formed, which has been used in actual production processing with good results and substantial economic benefit.
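The analytic-signal step of the Hilbert-Huang workflow can be sketched in a few lines: build the analytic signal in the frequency domain, then read off instantaneous amplitude and frequency. This uses a synthetic 30 Hz trace, not the paper's field data, and reproduces the standard FFT construction of the Hilbert transform:

```python
import numpy as np

fs = 500.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 30.0 * t)            # synthetic 30 Hz "trace"

def analytic_signal(x):
    """Analytic signal z = x + i*H[x], built by zeroing negative frequencies."""
    n = len(x)                               # n is even here
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

z = analytic_signal(x)
inst_amp = np.abs(z)                          # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi) # instantaneous frequency (Hz)
```

Applied to each intrinsic mode function from empirical mode decomposition, these instantaneous attributes are exactly what is assembled into the Hilbert spectrum described above.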
Abstract:
The Tien Shan is the most prominent intracontinental mountain belt on Earth. Its active crustal deformation and earthquake activity make it an excellent place to study the continental geodynamics of intracontinental mountain belts, and studies of the deep structure of its crust and upper mantle are highly significant for understanding the geological evolution and geodynamics of such belts worldwide. This dissertation focuses on the deep structure and geodynamics of the crust and upper mantle beneath the Tien Shan mountain belt. Using arrival-time data from permanent and temporary seismic stations in the western and central Tien Shan, and applying seismic travel-time tomography, we inverted for the P-wave velocity and Vp/Vs structure of the crust and uppermost mantle, the Pn and Sn velocities and Pn anisotropy of the uppermost mantle, and the P-wave velocity structure of the crust and mantle down to 690 km depth beneath the Tien Shan. The tomographic results suggest that the deep structure and geodynamics significantly influence not only the deformation and earthquake activity in the crust, but also the mountain building, collision, and dynamics of the whole Tien Shan belt. Owing to the strong collision and deformation in the crust, the 3-D P-wave velocity and Vp/Vs structures are highly complex. The Pn and Sn velocities in the uppermost mantle beneath the Tien Shan, especially beneath the central Tien Shan, are significantly lower than the wave speeds beneath geologically stable regions. We infer that hot upper mantle associated with small-scale convection could elevate temperatures in the lower crust and uppermost mantle and partially melt lower-crustal material; the observed low P- and S-wave velocities, the high Vp/Vs ratios near the Moho, and the absence of earthquake activity in the lower crust are consistent with this inference.
Based on teleseismic tomographic images of the upper mantle beneath the Tien Shan, we infer that the lithosphere beneath the Tarim basin has subducted under the Tien Shan to depths as great as 500 km. The lithosphere beneath the Kazakh shield may have subducted to similar depths in the opposite direction, but the limited resolution of this data set makes that inference less certain. These images support a plate-boundary model of convergence for the Tien Shan, as the lithospheres to the north and south of the range both appear to behave as plates.
Abstract:
This dissertation presents a series of irregular-grid numerical techniques for modeling seismic wave propagation in heterogeneous media. The study involves the generation of the irregular numerical mesh for the irregular-grid scheme, the discretization of the equations of motion on the unstructured mesh, and irregular-grid absorbing boundary conditions. The resulting numerical technique has been used to generate synthetic data sets on realistic complex geologic models for testing migration schemes. The discretization of the equations of motion and the modeling are based on the Grid Method, whose key idea is to use the integral equilibrium principle in place of the per-grid-point operator of finite-difference schemes and the variational formulation of the finite element method. The irregular grids for complex geologic models are generated by the Paving Method, which allows the grid spacing to vary according to meshing constraints. The resulting grids have high quality at domain boundaries and place coincident nodes at interfaces, which avoids interpolation of parameters and variables. The irregular-grid absorbing boundary conditions are developed by extending the perfectly matched layer (PML) method to rotated local coordinates, and the split PML equations of the first-order system are derived using the integral equilibrium principle. The proposed scheme can build PML boundaries of arbitrary geometry in the computational domain, avoiding the special corner treatment of the standard PML method and saving considerable memory and computation.
The numerical implementation demonstrates the desired qualities of the irregular-grid modeling technique. In particular, (1) memory requirements and computational time are reduced by changing the grid spacing according to the local velocity; (2) arbitrary surface and interface topographies are described accurately, removing the artificial reflections that result from staircase approximations of curved or dipping interfaces; and (3) the computational domain is significantly reduced by flexibly building curved artificial boundaries with the irregular-grid absorbing boundary conditions. The proposed irregular-grid approach is applied to reverse-time migration as the extrapolation algorithm. It can discretize the smoothed velocity model with irregular grids of variable scale, which helps reduce the computational cost, and it can handle data sets acquired on arbitrary topography, so that no field (static) correction is needed.
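Point (1) can be illustrated with the usual points-per-wavelength rule for velocity-adaptive grid spacing: coarse cells where the medium is fast, fine cells where it is slow. The maximum frequency, sampling rule, and velocities below are assumptions for illustration, not values from the dissertation:

```python
import numpy as np

f_max = 30.0   # maximum source frequency (Hz), assumed
ppw = 10.0     # grid points per shortest wavelength, assumed

def grid_spacing(velocity):
    """Local spacing h = v / (ppw * f_max): the shortest local wavelength
    v / f_max is sampled with ppw points, so slow zones get fine cells."""
    return velocity / (ppw * f_max)

# Example layer velocities (m/s): water, sediment, basement.
v = np.array([1500.0, 3000.0, 4500.0])
h = grid_spacing(v)
```

A uniform grid would have to use the finest spacing (here 5 m) everywhere, while the irregular mesh triples the spacing in the basement, which is the source of the memory and run-time savings.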
Abstract:
The ionospheric parameter M(3000)F2 (the so-called transmission or propagation factor) is important not only in practical applications such as frequency planning for radio communication but also in ionospheric modeling. This parameter is strongly anti-correlated with the ionospheric F2-layer peak height hmF2, a parameter often used as a key anchor point in widely used empirical models of the ionospheric electron density profile (e.g., the IRI and NeQuick models). Since hmF2 is not easy to obtain from measurements, while M(3000)F2 can be routinely scaled from ionograms recorded by ionosonde/digisonde stations distributed globally and has a long accumulated data record, the value of hmF2 is usually calculated from M(3000)F2 using an empirical formula connecting the two. In practice, the CCIR M(3000)F2 model is widely used to obtain M(3000)F2 values. Recently, however, some authors have found remarkable discrepancies between the CCIR M(3000)F2 model and measured M(3000)F2, especially in low-latitude and equatorial regions. For this reason the International Reference Ionosphere (IRI) research community has proposed improving or updating the currently used CCIR M(3000)F2 model, and any efforts toward improving the current model or newly developing a global hmF2 model are encouraged. In this dissertation, empirical models of M(3000)F2 and hmF2 are constructed using empirical orthogonal function (EOF) analysis combined with regression analysis. The main results are as follows. 1. A single-station model was constructed using monthly median hourly values of M(3000)F2 observed at Wuhan Ionospheric Observatory during 1957-1991 and compared with the IRI model. The results show that the EOF method can represent most of the variance of the original data set with only a few orders of EOF components; it is a powerful method for ionospheric modeling. 2. Using values of M(3000)F2 observed by ionosondes distributed globally, data on a uniform global grid were obtained by Kriging interpolation. The gridded data were then decomposed into EOF components in two different coordinate systems: (1) geographic longitude and latitude; (2) modified dip (Modip) and local time. Based on these two EOF decompositions, two types of global M(3000)F2 model were constructed. Statistical analysis showed that both constructed models agree better with observed M(3000)F2 than the model currently used by the IRI and represent the global variations of M(3000)F2 better. 3. The hmF2 data used to construct the hmF2 model were converted from observed M(3000)F2 with the empirical formula connecting them, and two types of global hmF2 model were constructed by the same method used for M(3000)F2. Statistical analysis showed that the predictions of our models are more accurate than those of the IRI model, demonstrating that constructing a global hmF2 model directly with the EOF analysis method is feasible. The results of this thesis indicate that the modeling technique based on EOF expansion combined with regression analysis is very promising for constructing global models of M(3000)F2 and hmF2; it is worth further investigation and has the potential to be applied to the global modeling of other ionospheric parameters.
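The EOF step can be sketched as an SVD of the mean-removed gridded data: the right singular vectors are the spatial EOF patterns, and the left singular vectors scaled by the singular values give the time coefficients that feed the regression step. The data below are random stand-ins for the gridded M(3000)F2 medians:

```python
import numpy as np

rng = np.random.default_rng(5)
# Stand-in gridded data: rows = time samples (monthly medians), cols = grid points.
data = rng.standard_normal((120, 50)) + 10.0

# EOF analysis: remove the time mean, then SVD of the anomaly matrix.
mean = data.mean(axis=0)
anom = data - mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

# Fraction of variance carried by each EOF component.
variance_fraction = s**2 / np.sum(s**2)

# Truncated reconstruction keeping only the first k components,
# as in "a few orders of EOF components represent most of the variance".
k = 10
reconstructed = mean + (U[:, :k] * s[:k]) @ Vt[:k]
```

In the dissertation's scheme, each retained time coefficient is then regressed on solar and seasonal indices, so the model can be evaluated for arbitrary epochs.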
Abstract:
In practical seismic profiles, multiple reflections tend to impede even the experienced interpreter in extracting information from the reflection data. Surface multiples are usually much stronger, more broadband, and more of a problem than internal multiples because the reflection coefficient at the water surface is much larger than the reflection coefficients found in the subsurface. For this reason most attempts to remove multiples from marine data, including this one, focus on surface multiples. Surface-related multiple attenuation can be formulated as an iterative procedure. In this thesis a fully data-driven approach called MPI, multiple prediction through inversion (Wang, 2003), is applied to a real marine seismic data example. It is a promising scheme that predicts a relatively accurate multiple model by updating the model iteratively, as is usually done in a linearized inverse problem. A prominent characteristic of the MPI method is that it eliminates the need for an explicit surface operator: it can model the multiple wavefield without any knowledge of the surface or subsurface structure, or even of the source signature. Another key feature is that it predicts multiples not only in time but also in phase and amplitude. The real-data experiments show that the multiple-prediction scheme can be made very efficient if a good initial estimate of the multiple-free data set is provided in the first iteration. For the other core step, multiple subtraction, an expanded multichannel matching (EMCM) filter is used. Whereas in a normal multichannel matching filter an original seismic trace is matched by a group of multiple-model traces, in the EMCM filter a seismic trace is matched not only by the ordinary multiple-model traces but also by their mathematically generated adjoints.
The adjoints of a multiple-model trace are its first derivative, its Hilbert transform, and the derivative of the Hilbert transform. The third chapter of the thesis applies the foregoing methods to real data, from which their effectiveness and practical value are evident. For this case, three groups of experiments were carried out: testing the effectiveness of the MPI method, comparing subtraction results obtained with a fixed filter length but different window lengths, and investigating the influence of the initial subtraction result on the MPI method. The real-data application shows that the initial demultiple estimate strongly influences the MPI method, so two approaches, based on the first arrival and on a masking filter respectively, are introduced to refine the initial demultiple estimate. Conclusions are drawn from these results in the final part.
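The expanded matching-filter idea can be sketched as a least-squares fit of the data trace onto the multiple-model trace and its three adjoints (derivative, Hilbert transform, derivative of the Hilbert transform), followed by subtraction. The traces below are synthetic stand-ins, and a single scalar coefficient per basis trace replaces the short convolutional filters used in practice:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 256
model = rng.standard_normal(n)                      # multiple-model trace
data = 0.7 * model + 0.1 * rng.standard_normal(n)   # trace containing multiples

def hilbert_transform(x):
    """Hilbert transform via the analytic signal (imaginary part)."""
    m = len(x)                                       # m is even here
    X = np.fft.fft(x)
    h = np.zeros(m)
    h[0] = 1.0
    h[1:m // 2] = 2.0
    h[m // 2] = 1.0
    return np.fft.ifft(X * h).imag

# EMCM basis: the model trace plus its three mathematically generated adjoints.
basis = np.stack([
    model,
    np.gradient(model),                    # first derivative
    hilbert_transform(model),              # Hilbert transform
    np.gradient(hilbert_transform(model)), # derivative of the Hilbert transform
], axis=1)

# Least-squares matching, then subtraction leaves the estimated primaries.
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
primaries = data - basis @ coef
```

The adjoints let the fit absorb small phase and timing errors in the predicted multiples, which is why the expanded basis subtracts more cleanly than the model trace alone.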