960 results for "equivalent web thickness method"


Relevance:

30.00%

Publisher:

Abstract:

Co-training is a semi-supervised learning method designed to take advantage of the redundancy present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of text and image data from the Web for co-training. We perform measurements to look for indications of conditional independence in the texts and images obtained from the Web, and our measurements suggest that conditional independence is likely to be present in the data. Our experiments within a relevance feedback framework, which test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
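The conditional-independence measurements described above can be sketched as a small estimator of conditional mutual information I(X1; X2 | Y): a value near zero indicates that the two views (e.g., discretized text and image features) are close to conditionally independent given the class. This is an illustrative estimator, not the paper's actual measurement procedure.

```python
import math
from collections import Counter

def conditional_mutual_information(samples):
    """Estimate I(X1; X2 | Y) in nats from a list of (x1, x2, y)
    triples with discrete values.  Near-zero values are evidence
    that the two views are conditionally independent given the
    class -- the setting in which co-training works well."""
    n = len(samples)
    c_xyz = Counter(samples)
    c_x1y = Counter((x1, y) for x1, _, y in samples)
    c_x2y = Counter((x2, y) for _, x2, y in samples)
    c_y = Counter(y for _, _, y in samples)
    cmi = 0.0
    for (x1, x2, y), k in c_xyz.items():
        p_xyz = k / n
        # compares p(x1, x2 | y) against p(x1 | y) * p(x2 | y),
        # rewritten in terms of joint probabilities
        cmi += p_xyz * math.log(
            p_xyz * (c_y[y] / n)
            / ((c_x1y[(x1, y)] / n) * (c_x2y[(x2, y)] / n)))
    return cmi
```

On data where the two views factorize given the class the estimate is zero; on perfectly correlated views it reaches log 2 for binary features.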

This study explores the different tools used by virtual communities on the Internet to create, spread and defend their ideas against rival communities and, at times, society at large. The particular case study on which this work focuses is that of various groups related to anorexia nervosa and their use of user names and user images. The research methodology is qualitative and experimental.

In 2009, the standardization of autologous keratinocyte cultures grown in autologous serum was presented. In this study the authors describe the effectiveness of these skin-regeneration patches for covering raw areas with an indication for a partial-thickness skin graft. The percentage of epithelialization of the raw area was the primary endpoint. Methods: 47 patients were included consecutively, corresponding to 78 raw areas. The areas were stratified by depth: group 1, IIA (n=8); group 2, IIB (n=39); group 3, III (n=24); and group 4, other aetiologies (n=7). All areas were treated with grafts of autologous keratinocytes cultured in autologous serum, and photographic records and the percentage of epithelialization were taken on days 5, 7, 15 and 30. Results: The effectiveness of the autologous keratinocyte grafts was 53.16% ± 46.46%. The percentage of epithelialization was higher for group 1 (100%) and group 2 (62.79%) than for group 3 (27.57%) and group 4 (33.86%). An interaction was found between the medians of the percentage of epithelialization by body area and burn depth (p<0.001, Kruskal-Wallis), being highest for group 1 in all areas, group 2 on the face, group 3 on the trunk and group 4 on the face; the lowest percentages of epithelialization occurred in groups 3 and 4 for areas located on the trunk. Conclusion: Grafts of autologous keratinocytes cultured in autologous serum are an effective coverage method for raw areas produced by IIA and IIB burns regardless of size and location, and for small raw areas (<9 cm²) of other aetiologies or grade III depth. Key words: keratinocyte culture, coverage of raw areas, effectiveness.

Most failures in structural elements are due to fatigue loading. Consequently, mechanical fatigue is a key factor in the design of mechanical components. In laminated composite materials, the fatigue failure process involves different damage mechanisms that result in the degradation of the material. One of the most important damage mechanisms is delamination between plies of the laminate. In aeronautical components, composite panels are exposed to impacts, and delaminations readily appear in a laminate after an impact. Many composite components have curved shapes, ply drops and plies with different orientations, which cause a delamination to propagate under a mixed mode that depends on the delamination size. That is, delaminations generally propagate under varying mixed mode. It is therefore important to develop new methods to characterize the subcritical mixed-mode fatigue growth of delaminations. The main objective of this work is the characterization of the varying-mixed-mode growth of delaminations in laminated composites under fatigue loading. To this end, a new model for mixed-mode fatigue delamination growth is proposed. In contrast to existing models, the proposed model is formulated according to the non-monotonic variation of the propagation parameters with the mode mixity observed in different experimental results. In addition, an analysis of the mixed-mode end load split (MMELS) test is carried out, whose most important characteristic is that the mode mixity varies as the delamination grows. For this analysis, two theoretical methods from the literature are considered. However, the resulting expressions for the MMELS test are not equivalent, and the differences between the two methods can be significant, by a factor of up to 50.
For this reason, an alternative, more accurate analysis of the MMELS test is carried out in this work in order to establish a comparison. This alternative analysis is based on the finite element method and the virtual crack closure technique (VCCT). Important aspects to be considered for the proper characterization of materials using the MMELS test emerge from this analysis. During the study, a fixture for the MMELS test was designed and built. For the experimental characterization of varying-mixed-mode fatigue delamination growth, different essentially unidirectional carbon/epoxy laminate specimens are used. A fractographic analysis of some of the delamination fracture surfaces is also carried out. The experimental results are compared with the predictions of the proposed model for the fatigue propagation of interlaminar cracks.
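A mode-mix-dependent Paris-type growth law of the general shape discussed above can be sketched as follows; the interpolation scheme and all numerical tables are illustrative placeholders, not the model or calibrated parameters from this thesis.

```python
def paris_rate(delta_G, B, C_table, m_table):
    """Fatigue delamination growth rate da/dN = C(B) * (dG)^m(B),
    where B = G_II / G_T is the mode mixity (0 = pure mode I,
    1 = pure mode II) and dG is the energy release rate range.
    C and m are looked up by linear interpolation in tables of
    (B, value) pairs, which lets them vary (even non-monotonically)
    with the mode mixity."""
    def interp(table, b):
        pts = sorted(table)
        for (b0, v0), (b1, v1) in zip(pts, pts[1:]):
            if b0 <= b <= b1:
                t = (b - b0) / (b1 - b0)
                return v0 + t * (v1 - v0)
        raise ValueError("mode mixity outside table range")
    C = interp(C_table, B)
    m = interp(m_table, B)
    return C * delta_G ** m
```

With more than two table points, the interpolated C(B) and m(B) need not vary monotonically with B, which is the behaviour the proposed model is built around.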

The work on Social Memory, focused on the biographic method and the paths of immaterial heritage, is the fabric that we have chosen to substantiate the idea of the museum. The social dimensions of memory, its construction and representation, are the thickness of the exhibition fabric. The specificity of museological work in contemporary times resembles a fine lace, a meticulous weaving of threads that flow from time: admirable lace, painstaking and complex, created with many needles, made up of hollow spots and stitches (of memories and things forgotten). Repetitions and symmetries are the pace that perpetuates it, the rhythmic grammar that gives it body. A fluid body, a single piece, circumstantial. It is always possible to create new patterns, new compositions, with the same threads. Accurately made, properly made, this lace of memories and things forgotten is always an extraordinary creation, a web of wonder that expands fantasy, generates value and feeds the endless reserve of the community's knowledge, values and beliefs.

The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the detection history of the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture–recapture information. It differs from the distance and multiple-observer methods in that not all birds are required to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by the fact that individuals with low singing rates also had very low detection probabilities.
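The Markovian singing process used in the simulations can be sketched as a two-state chain (singing bout vs. silence) that emits a detection history over the four 2-min subintervals; the transition probabilities below are illustrative, not the values used in the study.

```python
import random

def detection_history(p_start_bout, p_stay_bout, n_intervals=4, rng=random):
    """Simulate whether a bird sings (and so can be detected) in each
    2-min subinterval of an 8-min point count.  The bird alternates
    between singing bouts and silence as a two-state Markov chain:
    a silent bird starts a bout with probability p_start_bout, and a
    singing bird remains in its bout with probability p_stay_bout.
    Returns a 0/1 list, one entry per subinterval."""
    singing = rng.random() < p_start_bout   # initial state
    history = []
    for _ in range(n_intervals):
        history.append(1 if singing else 0)
        if singing:
            singing = rng.random() < p_stay_bout    # stay in bout
        else:
            singing = rng.random() < p_start_bout   # start a new bout
    return history
```

Because consecutive intervals are correlated through the bout state, detection histories cluster into runs of 1s and 0s, unlike the independent-interval model.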

The amateur birding community has a long and proud tradition of contributing to bird surveys and bird atlases. Coordinated activities such as Breeding Bird Atlases and the Christmas Bird Count are examples of "citizen science" projects. With the advent of technology, Web 2.0 sites such as eBird have been developed to facilitate online sharing of data and thus increase the potential for real-time monitoring. However, as recently articulated in an editorial in this journal and elsewhere, monitoring is best served when based on a priori hypotheses. Harnessing citizen scientists to collect data following a hypothetico-deductive approach carries challenges. Moreover, the use of citizen science in scientific and monitoring studies has raised issues of data accuracy and quality. These issues are compounded when data collection moves into the Web 2.0 world. An examination of the literature from social geography on the concept of "citizen sensors" and volunteered geographic information (VGI) yields thoughtful reflections on the challenges of data quality/data accuracy when applying information from citizen sensors to research and management questions. VGI has been harnessed in a number of contexts, including for environmental and ecological monitoring activities. Here, I argue that conceptualizing a monitoring project as an experiment following the scientific method can further contribute to the use of VGI. I show how principles of experimental design can be applied to monitoring projects to better control for data quality of VGI. This includes suggestions for how citizen sensors can be harnessed to address issues of experimental controls and how to design monitoring projects to increase randomization and replication of sampled data, hence increasing scientific reliability and statistical power.

In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
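The graded mesh described above, with smaller elements closer to the corners, can be sketched with a standard algebraic grading; the grading exponent here is illustrative, and in practice it would be tied to the polynomial degree of the approximation space.

```python
def graded_mesh(length, n, q):
    """Mesh points on a side of length `length`, graded toward the
    corner at 0 via x_j = length * (j/n)**q for j = 0..n.  q = 1
    gives a uniform mesh; larger q clusters points near the corner,
    where the solution of the scattering problem is singular."""
    return [length * (j / n) ** q for j in range(n + 1)]
```

For example, q = 2 on a unit side with 4 elements puts half of the mesh points in the quarter of the side nearest the corner.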

The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular we pay attention to the application of ontologies to products and the practical application of these ontologies.

Radiation schemes in general circulation models currently make a number of simplifications when accounting for clouds, one of the most important being the removal of horizontal inhomogeneity. A new scheme is presented that attempts to account for the neglected inhomogeneity by using two regions of cloud in each vertical level of the model as opposed to one. One of these regions is used to represent the optically thinner cloud in the level, and the other represents the optically thicker cloud. So, along with the clear-sky region, the scheme has three regions in each model level and is referred to as “Tripleclouds.” In addition, the scheme has the capability to represent arbitrary vertical overlap between the three regions in pairs of adjacent levels. This scheme is implemented in the Edwards–Slingo radiation code and tested on 250 h of data from 12 different days. The data are derived from cloud retrievals using radar, lidar, and a microwave radiometer at Chilbolton, southern United Kingdom. When the data are grouped into periods equivalent in size to general circulation model grid boxes, the shortwave plane-parallel albedo bias is found to be 8%, while the corresponding bias is found to be less than 1% using Tripleclouds. Similar results are found for the longwave biases. Tripleclouds is then compared to a more conventional method of accounting for inhomogeneity that multiplies optical depths by a constant scaling factor, and Tripleclouds is seen to improve on this method both in terms of top-of-atmosphere radiative flux biases and internal heating rates.
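The plane-parallel albedo bias, and the way a two-region (Tripleclouds-style) split reduces it, can be illustrated with a toy concave albedo curve; the functional form of the albedo and the numbers below are illustrative only, not the Edwards–Slingo radiation code.

```python
def albedo(tau, gamma=7.0):
    """Toy monotone, concave albedo-vs-optical-depth curve;
    the functional form and gamma are illustrative only."""
    return tau / (tau + gamma)

def plane_parallel_bias(taus):
    """Bias from replacing an inhomogeneous optical-depth field by
    its mean: albedo(mean tau) minus the mean albedo.  Positive
    because albedo is concave in tau (Jensen's inequality)."""
    mean_tau = sum(taus) / len(taus)
    mean_alb = sum(albedo(t) for t in taus) / len(taus)
    return albedo(mean_tau) - mean_alb

def two_region_bias(taus):
    """Tripleclouds-style reduction: split the field at the median
    into a thinner and a thicker half, represent each half by its
    own mean optical depth, and compare with the exact mean albedo."""
    s = sorted(taus)
    half = len(s) // 2
    thin, thick = s[:half], s[half:]
    approx = 0.5 * (albedo(sum(thin) / len(thin)) +
                    albedo(sum(thick) / len(thick)))
    exact = sum(albedo(t) for t in taus) / len(taus)
    return approx - exact
```

Even this crude two-region split cuts the bias well below the single-region plane-parallel value, which is the qualitative effect the 8%-to-under-1% result above quantifies.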

Background/aims: Scant consideration has been given to the variation in structure of the human amniotic membrane (AM) at source, or to the significance such differences might have for its clinical transparency. Therefore, we applied our experience of quantifying corneal transparency to AM. Methods: Following elective caesarean, AM from areas of the fetal sac distal and proximal (i.e., adjacent) to the placenta was compared with freeze-dried AM. The transmission of light through the AM samples was quantified spectrophotometrically; tissue thickness was measured by light microscopy and refractive index by refractometry. Results: Freeze-dried and freeze-thawed AM samples distal and proximal to the placenta differed significantly in thickness, percentage transmission of visible light and refractive index. The thinnest tissue (freeze-dried AM) had the highest transmission spectra. The thickest tissue (freeze-thawed AM proximal to the placenta) had the highest refractive index. Using the direct summation of fields method to predict transparency from an equivalent thickness of corneal tissue, AM was found to be up to 85% as transparent as human cornea. Conclusion: When preparing AM for ocular surface reconstruction within the visual field, consideration should be given to its original location within the fetal sac and its method of preservation, as either can influence corneal transparency.

The goal of this work is the numerical realization of the probe method suggested by Ikehata for the detection of an obstacle D in inverse scattering. The main idea of the method is to use probes in the form of point sources Φ(·, z) with source point z to define an indicator function Î(z) which can be reconstructed from Cauchy data or far-field data. The indicator function Î(z) can be shown to blow up when the source point z tends to the boundary ∂D, and this behaviour can be used to find D. To study the feasibility of the probe method we use two equivalent formulations of the indicator function. We carry out the numerical realization of the functional and show reconstructions of a sound-soft obstacle.

The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. Thus, the IntFOLD-TS method first generates numerous alternative models, using in-house versions of several different sequence–structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.

This work proposes a method to objectively determine the most suitable analogue redesign method for forward type converters under digital voltage mode control. Particular emphasis is placed on determining the method which allows the highest phase margin at the particular switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies with respect to switching frequency, controllers designed using backward integration have the largest phase margin; whereas at low crossover frequencies with respect to switching frequency, controllers designed using bilinear integration have the largest phase margins. An accurate model of the power stage is used for simulation, and experimental results from a Buck converter are collected. The performance of the digital controllers is compared to that of the equivalent analogue controller both in simulation and experiment. Excellent correlation between the simulation and experimental results is presented. This work will allow designers to confidently choose the analogue redesign method which yields the greater phase margin for their application.
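The two discretizations compared above can be illustrated on the integrator term alone: backward integration maps 1/s to T·z/(z−1), which contributes a phase lead of ωT/2 relative to the ideal integrator, while bilinear (Tustin) integration maps it to (T/2)(z+1)/(z−1), which preserves the −90° integrator phase exactly. This is a sketch of the integrator term only, so it does not by itself reproduce the crossover-frequency trade-off reported for full controllers.

```python
import cmath, math

def integrator_phase_deg(method, w, T):
    """Phase (degrees) of a discretized integrator 1/s evaluated at
    angular frequency w with sample period T.
    'backward': H(z) = T*z/(z-1)      (backward rectangular rule)
    'bilinear': H(z) = (T/2)*(z+1)/(z-1)  (Tustin's rule)"""
    z = cmath.exp(1j * w * T)
    if method == "backward":
        H = T * z / (z - 1)
    elif method == "bilinear":
        H = (T / 2) * (z + 1) / (z - 1)
    else:
        raise ValueError(method)
    return math.degrees(cmath.phase(H))
```

At a crossover of one tenth of the switching frequency (ωT = 0.2π), the backward integrator sits at −72° versus the bilinear integrator's −90°, i.e., 18° of extra phase that grows with ωT.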

This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector format from its Web service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimension of the term vector and to perform semantic expansion to meet the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and validation is carried out on the dataset.
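The clustering step can be sketched as a small single-link agglomerative procedure over cosine similarities of term vectors; the linkage choice and stopping threshold here are illustrative, not the paper's specific algorithm.

```python
import math

def cosine(u, v):
    """Cosine similarity of two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def agglomerate(vectors, threshold):
    """Single-link agglomerative clustering: repeatedly merge the two
    most similar clusters until no pair of clusters is more similar
    than `threshold`.  Returns clusters as lists of vector indices."""
    clusters = [[i] for i in range(len(vectors))]
    def link(c1, c2):   # single link: best pairwise similarity
        return max(cosine(vectors[i], vectors[j]) for i in c1 for j in c2)
    while len(clusters) > 1:
        s, a, b = max(
            (link(clusters[i], clusters[j]), i, j)
            for i in range(len(clusters))
            for j in range(i + 1, len(clusters)))
        if s < threshold:
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

At query time, a service request vector would only need to be compared against cluster representatives rather than every service vector, which is the efficiency gain the paper targets.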