969 results for Vector analysis.


Relevance:

100.00%

Publisher:

Abstract:

Seven strong earthquakes with M >= 6.5 occurred in southern California between 1980 and 2005. In this paper, these earthquakes were studied with the LURR (Load/Unload Response Ratio) method and the State Vector method to detect whether anomalies appeared before them. The results show that LURR anomalies appeared before 6 of the 7 earthquakes, and State Vector anomalies appeared before all 7. For the LURR method, the interval between the maximum LURR value and the forthcoming earthquake ranges from 1 to 19 months, with a dominant mean interval of about 10.7 months. For the State Vector method, the interval between the maximum modulus of the increment State Vector and the forthcoming earthquake ranges from 3 to 27 months, but the dominant mean interval between the occurrence of the maximum State Vector anomaly and the forthcoming earthquake is about 4.7 months. The results also show that the minimum valid space-window scale is a circle with a radius of 100 km for the LURR method and a 3 degrees by 3 degrees square for the State Vector method. These results imply that the State Vector method is more effective for short-term earthquake prediction than the LURR method, whereas the LURR method is more effective for location prediction.
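As a rough, hedged illustration of the general form of a Load/Unload Response Ratio (a sketch only; the Coulomb-stress loading/unloading classification, windowing and parameters actually used in the paper are not specified here, and the Gutenberg-Richter energy relation and Benioff-strain exponent are assumed choices):

```python
import numpy as np

def lurr(magnitudes, loading_mask, m_exp=0.5):
    """Toy Load/Unload Response Ratio: summed Benioff strain released while
    the region is being loaded, divided by that released while it is being
    unloaded. `loading_mask` marks events falling in loading phases of the
    stress cycle (an assumed, precomputed input)."""
    energy = 10.0 ** (1.5 * np.asarray(magnitudes) + 4.8)  # Gutenberg-Richter energy (J)
    benioff = energy ** m_exp
    loaded = benioff[loading_mask].sum()
    unloaded = benioff[~loading_mask].sum()
    return loaded / unloaded if unloaded > 0 else np.inf

# Hypothetical catalogue slice: 6 small events, 4 of them during loading phases
print(lurr(np.array([2.1, 2.4, 1.9, 3.0, 2.2, 2.7]),
           np.array([True, True, False, True, False, True])))
```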

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the problem of privacy-preserving data publishing for social networks. Research on protecting the privacy of individuals and the confidentiality of data in social networks has recently been receiving increasing attention. Privacy is an important issue when one wants to make use of data that involves individuals' sensitive information, especially at a time when data collection is becoming easier and sophisticated data mining techniques are becoming more efficient. In this paper, we discuss various privacy attack vectors on social networks. We present algorithms that sanitize data to make it safe for release while preserving useful information, and discuss ways of analyzing the sanitized data. This study provides a summary of the current state of the art, on the basis of which we expect to see advances in social network data publishing for years to come.
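The specific sanitization algorithms surveyed in the paper are not reproduced here; as a loose, hedged sketch of the general idea (strip identities, then randomly rewire a fraction of edges so individual links are harder to re-identify), assuming networkx is available:

```python
import networkx as nx

def naive_sanitize(G, swap_fraction=0.1, seed=42):
    """Replace node identities with integers and randomly rewire a fraction
    of edges via degree-preserving double edge swaps. A toy illustration,
    not one of the algorithms presented in the paper."""
    H = nx.relabel_nodes(G, {n: i for i, n in enumerate(G.nodes())})  # drop identities
    n_swaps = max(1, int(swap_fraction * H.number_of_edges()))
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=n_swaps * 100, seed=seed)
    return H

# Hypothetical toy network
G = nx.karate_club_graph()
print(naive_sanitize(G).number_of_edges())
```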

Relevance:

100.00%

Publisher:

Abstract:

Background & aims: The boundaries between the categories of body composition provided by vectorial analysis of bioimpedance are not well defined. In this paper, fuzzy set theory was used to model such uncertainty. Methods: An Italian database with 179 cases aged 18-70 years was divided randomly into a development sample (n = 20) and a testing sample (n = 159). Of the 159 records in the testing sample, 99 had an unequivocal diagnosis. Resistance/height and reactance/height were the input variables of the model. The output variables were the seven categories of body composition of vectorial analysis. For each case the linguistic model estimated the membership degree of each impedance category. Kappa statistics were used to compare these results with the previously established diagnoses. This required singling out one category from the output set of seven membership degrees. This procedure (defuzzification rule) established that the category with the highest membership degree should be taken as the most likely category for the case. Results: The fuzzy model showed a good fit to the development sample. Excellent agreement was achieved between the defuzzified impedance diagnoses and the clinical diagnoses in the testing sample (Kappa = 0.85, p < 0.001). Conclusions: The fuzzy linguistic model was in good agreement with the clinical diagnoses. If the whole model output is considered, information on the extent to which each BIVA category is present better informs clinical practice, offering an enlarged nosological framework and diverse therapeutic strategies. (C) 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
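A minimal sketch of the defuzzification rule and agreement check described above, with hypothetical membership degrees (the actual linguistic model and its membership functions are not reproduced); scikit-learn's Cohen's kappa is assumed as the Kappa statistic:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def defuzzify(memberships):
    """Defuzzification rule from the paper: for each case, take the BIVA
    category with the highest membership degree as the most likely one."""
    return np.argmax(memberships, axis=1)

# Hypothetical membership degrees for 4 cases over the 7 BIVA categories,
# and the corresponding clinical (reference) diagnoses as category indices.
memberships = np.array([
    [0.05, 0.80, 0.05, 0.04, 0.03, 0.02, 0.01],
    [0.60, 0.20, 0.10, 0.05, 0.03, 0.01, 0.01],
    [0.02, 0.03, 0.05, 0.10, 0.70, 0.05, 0.05],
    [0.10, 0.10, 0.55, 0.15, 0.05, 0.03, 0.02],
])
clinical = np.array([1, 0, 4, 2])

print(cohen_kappa_score(clinical, defuzzify(memberships)))  # 1.0 for this toy data
```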

Relevance:

100.00%

Publisher:

Abstract:

Objective: To observe the behavior of the plotted vectors on the RXc graph (R, resistance, and Xc, reactance, both corrected for body height/length) obtained through bioelectrical impedance vector analysis (BIVA), together with phase angle (PA) values, in stable premature infants, under the hypothesis that preterm infants present vector behavior on BIVA suggestive of less total body water and soft tissue than reference data for term infants. Methods: Cross-sectional study including preterm neonates of both genders admitted to an intermediate care unit at a tertiary care hospital. Data on delivery, diet and bioelectrical impedance (800 µA, 50 kHz) were collected. The graphs and vector analysis were produced with the BIVA software. Results: A total of 108 preterm infants were studied, separated by age (< 7 days and >= 7 days). Most of the premature babies fell outside the normal range (above the 95% tolerance intervals) reported in the literature for term newborn infants, and the points tended to disperse toward the upper right quadrant of the RXc plane. The PA was 4.92 degrees (+/- 2.18) for newborns < 7 days and 4.34 degrees (+/- 2.37) for newborns >= 7 days. Conclusion: Premature infants show similar behavior in terms of BIVA, and most of them have less absolute body water, with less fat-free mass and fat mass in absolute values, than term newborn infants.
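For reference, a minimal sketch of the quantities plotted in BIVA (height-normalized R and Xc) and of the phase angle computed as the arctangent of Xc over R; the measurement values below are hypothetical:

```python
import numpy as np

def biva_point(resistance_ohm, reactance_ohm, height_m):
    """Return the RXc-graph coordinates (R/H, Xc/H, in ohm/m) and the phase
    angle in degrees, computed as arctan(Xc / R)."""
    r_h = resistance_ohm / height_m
    xc_h = reactance_ohm / height_m
    phase_angle_deg = np.degrees(np.arctan2(reactance_ohm, resistance_ohm))
    return r_h, xc_h, phase_angle_deg

# Hypothetical preterm measurement at 50 kHz: R = 600 ohm, Xc = 50 ohm, length 0.45 m
print(biva_point(600.0, 50.0, 0.45))
```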

Relevance:

100.00%

Publisher:

Abstract:

An unabridged and unaltered republication of the second edition published by Charles Scribner's Sons in 1909.

Relevance:

70.00%

Publisher:

Abstract:

A numerical integration procedure for rotational motion using a rotation vector parametrization is explored from an engineering perspective using rudimentary vector analysis. The incremental rotation vector, angular velocity and acceleration correspond to different tangent spaces of the rotation manifold at different times and have a non-vectorial character. We rewrite the equation of motion in terms of vectors lying in the same tangent space, facilitating vector space operations consistent with the underlying geometric structure. While any integration algorithm that works within a vector space setting may be used, we presently employ a family of explicit Runge-Kutta algorithms to solve this equation. Although this work is primarily motivated by the need for highly accurate numerical solutions of dissipative rotational systems of engineering interest, we also compare the numerical performance of the present scheme with some of the invariant-preserving schemes, namely ALGO-C1, STW, LIEMID[EA] and SUBCYC-M. Numerical results show better local accuracy with the present approach vis-a-vis the preserving algorithms. It is also noted that the preserving algorithms do not simultaneously preserve all constants of motion. We incorporate adaptive time-stepping within the present scheme, which in turn enables still higher accuracy and a 'near preservation' of constants of motion over significantly longer intervals. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
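The paper's rewritten equation of motion is not reproduced here; purely to illustrate the point that any vector-space integrator can be applied once everything lives in one tangent space, a minimal classical explicit Runge-Kutta (RK4) step, with a placeholder right-hand side:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order explicit Runge-Kutta step for dy/dt = f(t, y).
    Works for any state y expressed in a single vector (tangent) space."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Hypothetical placeholder dynamics: y = [theta, theta_dot] with a linear restoring rate,
# standing in for the paper's rewritten rotational equation of motion.
f = lambda t, y: np.array([y[1], -0.5 * y[0]])
print(rk4_step(f, 0.0, np.array([0.1, 0.0]), 0.01))
```

Adaptive time-stepping, as used in the paper, would wrap such a step with a local error estimate and step-size control.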

Relevance:

70.00%

Publisher:

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert's curve, as it possesses good locality-preserving properties. However, there exists little comparison between Hilbert's curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert's curve, Sammon's mapping and Principal Component Analysis have been used to generate a 1D space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert's curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2D network model and with a realistic network topology model with the typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert's curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement and that better techniques for preserving locality information are required.
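A small sketch of the dimensionality-reduction step for the two-landmark case, using the standard Hilbert-curve index computation (the actual landmark count, quantization and identifier scheme in the paper are not reproduced; the latency values are hypothetical):

```python
def hilbert_xy2d(n, x, y):
    """Map a grid point (x, y) on an n-by-n grid (n a power of two) to its
    index along the Hilbert curve; nearby points tend to get nearby indices."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the recursive pattern lines up
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical latencies (ms) to two landmark nodes, quantized to a 256x256 grid,
# mapped to a single locality-preserving peer index
lat_a, lat_b = 37, 112
print(hilbert_xy2d(256, lat_a, lat_b))
```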

Relevance:

70.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

70.00%

Publisher:

Abstract:

Purpose – Curve fitting from unordered noisy point samples is needed for surface reconstruction in many applications. In the literature, several approaches have been proposed to solve this problem. However, previous works lack a formal characterization of the curve fitting problem and an assessment of the effect of several parameters (i.e. scalars that remain constant in the optimization problem), such as the number of control points (m), curve degree (b), knot vector composition (U), norm degree (k), and point sample size (r), on the optimized curve reconstruction measured by a penalty function (f). The paper aims to discuss these issues. Design/methodology/approach – A numerical sensitivity analysis of the effect of m, b, k and r on f and a characterization of the fitting procedure from the mathematical viewpoint are performed. Also, the spectral (frequency) analysis of the derivative of the angle of the fitted curve with respect to u is explored as a means to detect spurious curls and peaks. Findings – It is more effective to find optimum values for m than for k or b in order to obtain good results, because the topological faithfulness of the resulting curve strongly depends on m. Furthermore, when an excessive number of control points is used, the resulting curve presents spurious curls and peaks. The authors were able to detect the presence of such spurious features with spectral analysis. Also, the authors found that the method for curve fitting is robust to significant decimation of the point sample. Research limitations/implications – The authors have addressed important voids of previous works in this field. The authors determined which of the curve fitting parameters m, b and k influenced the results the most, and how. Also, the authors performed a characterization of the curve fitting problem from the optimization perspective. Finally, the authors devised a method to detect spurious features in the fitted curve. Practical implications – This paper provides a methodology to select the important tuning parameters in a formal manner. Originality/value – To the best of the authors' knowledge, no previous work has formally evaluated the sensitivity of the goodness of the curve fit with respect to the different possible tuning parameters (curve degree, number of control points, norm degree, etc.).
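The paper's own fitting formulation (penalty f, norm degree k, optimized knot vector U) is not reproduced; as a hedged sketch of the spectral check on the derivative of the tangent angle, assuming the samples have already been ordered and using SciPy's parametric spline fitting:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def tangent_angle_spectrum(x, y, smoothing=0.0, degree=3):
    """Fit a parametric spline to ordered planar samples and return the
    magnitude spectrum of d(angle)/du, the derivative of the tangent angle
    with respect to the curve parameter u. Strong high-frequency content
    hints at spurious curls and peaks from over-parametrization."""
    tck, u = splprep([x, y], s=smoothing, k=degree)
    dx, dy = splev(u, tck, der=1)
    angle = np.unwrap(np.arctan2(dy, dx))
    return np.abs(np.fft.rfft(np.gradient(angle, u)))

# Hypothetical noisy samples along a quarter circle
t = np.linspace(0.0, np.pi / 2.0, 50)
x = np.cos(t) + 0.01 * np.random.randn(50)
y = np.sin(t) + 0.01 * np.random.randn(50)
print(tangent_angle_spectrum(x, y, smoothing=0.05)[:5])
```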

Relevance:

60.00%

Publisher:

Abstract:

Although the first procedure in a seeing human eye using excimer laser was reported in 1988 (McDonald et al. 1989, O'Connor et al. 2006), just three studies (Kymionis et al. 2007, O'Connor et al. 2006, Rajan et al. 2004) with a follow-up over ten years had been published when this thesis was started. The present thesis aims to investigate 1) the long-term outcomes of excimer laser refractive surgery performed for myopia and/or astigmatism by photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK), 2) the possible differences in postoperative outcomes and complications when moderate-to-high astigmatism is treated with PRK or LASIK, 3) the presence of irregular astigmatism that depends exclusively on the corneal epithelium, and 4) the role of corneal nerve recovery in corneal wound healing in PRK enhancement. Our results revealed that, in the long term, the number of eyes that achieved uncorrected visual acuity (UCVA) ≤ 0.0 and ≤ 0.5 (logMAR) was higher after PRK than after LASIK. Postoperative stability was slightly better after PRK than after LASIK. In LASIK-treated eyes the incidence of myopic regression was more pronounced when the intended correction was over 6.0 D and in patients aged under 30 years, although the intended corrections in our study were higher for LASIK than for PRK eyes. No differences were found in the percentages of eyes with best corrected visual acuity (BCVA) or with loss of two or more lines of visual acuity between PRK and LASIK in the long term. The postoperative long-term outcomes of PRK with two different delivery systems, broad-beam and scanning laser, were compared and revealed no differences. Postoperative outcomes of moderate-to-high astigmatism yielded better results in terms of UCVA, and less compromise or loss of two or more lines of BCVA, after LASIK than after PRK. Similar stability was revealed for both procedures. Vector analysis showed that LASIK outcomes tended to be more accurate than PRK outcomes, yet no statistically significant differences were found. Irregular astigmatism secondary to recurrent corneal erosion due to map-dot-fingerprint dystrophy was successfully treated with phototherapeutic keratectomy (PTK). Preoperative videokeratographies (VK) showed irregular astigmatism; postoperatively, however, all eyes showed a regular pattern. No correlation was found between pre- and postoperative VK patterns. Postoperative outcomes of late PRK in eyes originally subjected to LASIK showed that all (7/7) eyes achieved UCVA ≤ 0.5 at last follow-up (range 3 to 11 months), and no eye lost lines of BCVA. Postoperatively, all eyes developed an initial mild haze (0.5 to 1) within the first month. Yet, at last follow-up 5/7 eyes showed a haze of 0.5, and it was no longer evident in 2/7 eyes. Based on these results, we demonstrated that the long-term outcomes after PRK and LASIK were safe and efficient, with similar stability for both procedures. The PRK outcomes were similar when treated by broad-beam or scanning-slit laser. LASIK was better than PRK for correcting moderate-to-high astigmatism, yet both procedures showed a tendency toward undercorrection. Irregular astigmatism was shown to be able to depend exclusively on the corneal epithelium. If this kind of astigmatism is present in the cornea and a customized PRK/LASIK correction is performed based on wavefront measurements, irregular astigmatism may be produced rather than treated. Corneal sensory nerve recovery likely plays an important role in modulating corneal wound healing and postoperative anterior stromal scarring.
PRK enhancement may be an option in eyes with previous LASIK after a sufficient time interval of at least 2 years.

Relevance:

60.00%

Publisher:

Abstract:

Image fusion techniques are useful to integrate the geometric detail of a high-resolution panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MSS) image, which is particularly important for understanding land use dynamics at larger scales (1:25000 or lower), as required by decision makers to adopt holistic approaches for regional planning. Fused images can extract features from the source images and provide more information than a single MSS scene. High spectral resolution aids in identifying objects more distinctly, while high spatial resolution allows them to be located more precisely. Geoinformatics technologies, with their ability to provide high-spatial- and spectral-resolution data, help in inventorying, mapping, monitoring and sustainably managing natural resources. The fusion module in GRDSS, taking into consideration the limited spatial resolution of MSS data and the limited spectral resolution of PAN data, provides the high-spatial- and spectral-resolution remote sensing images required for land use mapping at a regional scale. GRDSS is a freeware GIS graphical user interface (GUI) developed in Tcl/Tk and based on the command-line modules of GRASS (Geographic Resources Analysis Support System), with functionalities for raster analysis, vector analysis, site analysis, image processing, modeling and graphics visualization. It has the capability to capture, store, process, analyse, prioritize and display spatial and temporal data.
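The GRDSS fusion module itself is not reproduced here; as a rough, hedged sketch of one common fusion scheme (a Brovey-style transform, assuming the MSS bands have already been resampled to the PAN grid):

```python
import numpy as np

def brovey_fuse(pan, mss_upsampled, eps=1e-6):
    """Inject the spatial detail of the high-resolution PAN band into each
    multispectral band by scaling every band with PAN / (sum of bands).
    `mss_upsampled` stacks the bands on axis 0 and must already match the PAN grid."""
    intensity = mss_upsampled.sum(axis=0) + eps
    return mss_upsampled * (pan / intensity)

# Hypothetical 3-band, 4x4 toy scene
pan = np.random.rand(4, 4)
mss = np.random.rand(3, 4, 4)
print(brovey_fuse(pan, mss).shape)  # (3, 4, 4)
```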

Relevance:

60.00%

Publisher:

Abstract:

Fully structured and mature open source spatial and temporal analysis technology appears to be the carrier of the future for natural resource planning, especially in developing nations. This technology has gained enormous momentum because of its technical superiority, affordability and ability to draw on expertise from all sections of society. Sustainable development of a region depends on the integrated planning approaches adopted in decision making, which require timely and accurate spatial data. With the increase in developmental programmes, the need for appropriate decision support systems has grown in order to analyse and visualise decisions associated with the spatial and temporal aspects of natural resources. In this regard, Geographic Information Systems (GIS), together with remote sensing data, support applications that involve spatial and temporal analysis of digital thematic maps and remotely sensed images. Open source GIS would help in wide-scale applications involving decisions at various hierarchical levels (for example, from village panchayat to planning commission) on economic viability and social acceptance, apart from technical feasibility. GRASS (Geographic Resources Analysis Support System, http://wgbis.ces.iisc.ernet.in/grass) is an open source GIS that runs on the Linux platform (freeware), but most of its modules are driven by command-line arguments, necessitating a user-friendly and cost-effective graphical user interface (GUI). Keeping these aspects in mind, the Geographic Resources Decision Support System (GRDSS) has been developed with functionality such as raster analysis, topological vector analysis, image processing, statistical analysis, geographical analysis and graphics production. It operates through a GUI developed in Tcl/Tk (Tool Command Language / Toolkit) under Linux, as well as through a shell in X-Windows. GRDSS includes options such as import/export of different data formats, display, digital image processing, map editing, raster analysis, vector analysis, point analysis and spatial query, which are required for regional planning tasks such as watershed analysis and landscape analysis. It is customised to the Indian context, with an option to extract individual bands from IRS (Indian Remote Sensing satellite) data, which is in BIL (Band Interleaved by Lines) format. The integration of PostgreSQL (a freeware) into GRDSS provides an efficient database management system.
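GRDSS drives GRASS modules whose names and parameters belong to GRASS itself; as a hedged illustration of the kind of raster and watershed operations involved (using GRASS's Python scripting interface rather than the Tcl/Tk GUI, with hypothetical map names):

```python
# Requires a running GRASS session (e.g. started with `grass --exec python script.py`)
import grass.script as gs

# Hypothetical map names: 'dem' is an existing elevation raster in the current mapset.
gs.run_command("r.slope.aspect", elevation="dem", slope="slope", aspect="aspect")
gs.run_command("r.watershed", elevation="dem", threshold=10000,
               accumulation="flowacc", basin="basins")
print(gs.read_command("r.info", map="basins"))
```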

Relevance:

60.00%

Publisher:

Abstract:

Mangroves are a coastal ecosystem occurring in the tropical and subtropical regions of the planet, occupying the intertidal zone of the oceans and characterized by arboreal vegetation adapted to adverse conditions of salinity, substrate, low oxygenation and periodic submersion. Pressure on the mangroves of the State of Rio de Janeiro has intensified in recent decades and is associated with pressure vectors such as landfilling, deforestation, fires, selective logging, predatory harvesting of molluscs and crustaceans, discharge of effluents from various sources, overexploitation of fishery resources and the use of inadequate techniques and gear. Considering the lack of an integrated, up-to-date mapping of the remaining mangroves indicating their location and extent, the present study meets this demand, building a consistent tool for analysing the main pressure vectors to which they are exposed and supporting the proposal of actions for the conservation and monitoring of this ecosystem. These actions take into account the need to preserve biodiversity, to maintain fishing activity and shoreline stability, and to sustain the livelihoods of the various populations inhabiting the coastal zone. The mapping of the mangroves of the State of Rio de Janeiro was produced from the visual interpretation of colour orthophotographs from 2005, at a scale of 1:10,000, with field checks carried out to establish the ground truth. The mapped remnants total approximately 17,720 ha, distributed across seven hydrographic regions located in the coastal zone of the state. They occur most frequently, and with the largest extents, in the regions of Ilha Grande Bay, Guandu (Sepetiba) and Guanabara Bay. The study also included the survey and systematization of cartographic and remote sensing data, and the identification and analysis of the main pressure vectors acting on the mangroves, based on an adaptation of the Causal Chain Analysis methodology. This analysis identified habitat and community modification, pollution, and unsustainable exploitation of fishery resources as the main environmental problems of the state's mangroves, all associated with the pressure vectors already mentioned. Finally, proposals for action were presented to support the implementation of the State Policy for the Conservation of the Mangroves of the State of Rio de Janeiro, covering the operational, planning and political levels. The reactivation of the Permanent Technical Group on Mangroves is of vital importance for resuming these discussions and implementing actions, supported by the expansion of knowledge about this rich ecosystem and by integrating and strengthening the work of the various actors involved, thereby seeking to guarantee the integrity of the state's mangroves.