942 results for: Hilbert schemes of points, Poincaré polynomial, Betti numbers, Göttsche formula
Abstract:
This is a review of the publications on Inclusive Education that have appeared in recent years. A total of twenty works are discussed, mainly books, articles and book chapters, covering a variety of viewpoints, perspectives and contributions to this new education. Among the reviews we can find five types of approaches to educational inclusion: 1. Those that conceptually seek to describe, contextualize, identify and assess this model of education; 2. Those that offer an international perspective, comparing the state of inclusive education in different countries, most of them Anglo-Saxon; 3. Those that take an organizational perspective, examining how inclusive processes develop within the organization of schools and their school structure; 4. Analysis and evaluation of training and professional-development proposals capable of supporting the development of Inclusive Education; 5. Research in and on inclusive education, analysing the suitability of this education or, where appropriate, the mismatch of certain methodologies.
Abstract:
Creativitat i subversió en les reescriptures de Joan Sales (Creativity and subversion in Joan Sales's rewritings) focuses on the figure of the editor and novelist Joan Sales (1912-1983) and aims to resituate the author of Incerta glòria within the Catalan literary panorama by diversifying the viewpoints from which he can be studied. The study draws on late twentieth-century translation theories, developed by authors such as André Lefevere and Susan Bassnett, which place translation, editing, adaptation, literary criticism and historiography within the field of creative rewriting and grant all these activities a subversive power. The aim has thus been to reverse the tendency that historically led Sales's rewritings to be viewed negatively. Under the theoretical umbrella of rewriting, manipulations, changes and interventions become a tool that contributes to the literary evolution of a culture.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the learning of the most common objects, which we achieve through living. Nowadays, modelling the behaviour of our brain is a fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; for example, some points have no correspondence because of a surface occlusion or simply because they project outside the scope of one camera.
The interest of the thesis is focused on structured light, which is considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image captured by a camera sensor. The deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light is imaged by the camera, and we read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are reviewed and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern that solves the correspondence problem uniquely and robustly: uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching; robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases because the pattern is coded in a single projection shot, so it can be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) further image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
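The core stereo computation described above, recovering a 3D point from a pair of corresponding image points once both camera models are calibrated, can be sketched with a standard linear (DLT) triangulation. The camera matrices and the 3D point below are toy illustrative assumptions, not data or code from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two calibrated cameras P1, P2 (3x4 matrices)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy pinhole cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))  # recovers [0.5, -0.2, 4.0]
```

With noisy or mismatched correspondences the same linear system is solved in a least-squares sense, which is why the correspondence problem dominates the accuracy of the reconstruction.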
Abstract:
The work developed in this thesis deepens and contributes innovative solutions in the field of the correspondence problem in underwater images. In these environments, what really complicates the processing tasks is the lack of well-defined contours caused by blurred images, a fact mainly due to deficient lighting or to the lack of uniformity of artificial lighting systems. The goals achieved in this thesis can be highlighted in two main directions. To improve the motion estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between pairs of images. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to achieve real-time results, an innovative VLSI architecture is proposed for implementing some parts of the motion estimation algorithm that have a high computational cost.
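The texture-based rejection of false matches mentioned above can be sketched with block matching plus a texture check: low-texture blocks (the blur-prone case in underwater imagery) are discarded as unreliable. The variance-based texture measure, the SAD criterion and the threshold are invented for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def match_block(ref, tgt, top_left, size=8, search=4, tex_thresh=5.0):
    """Find the displacement of a block of `ref` inside `tgt` by SAD
    search. Blocks whose intensity variance (a crude texture measure)
    falls below tex_thresh are rejected, returning None."""
    y, x = top_left
    block = ref[y:y+size, x:x+size]
    if block.var() < tex_thresh:        # low texture -> likely false match
        return None
    best, best_sad = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy+size > tgt.shape[0] or xx+size > tgt.shape[1]:
                continue
            sad = np.abs(tgt[yy:yy+size, xx:xx+size] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (32, 32))
tgt = np.roll(ref, (2, 1), axis=(0, 1))   # scene shifted by (2, 1)
print(match_block(ref, tgt, (8, 8)))      # -> (2, 1)
flat = np.full((32, 32), 100.0)           # textureless region
print(match_block(flat, flat, (8, 8)))    # -> None (rejected)
```

In hardware, the inner SAD loop is the part that maps naturally onto a parallel VLSI datapath, which is consistent with the architecture direction described in the abstract.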
Abstract:
This thesis studies some of the agrarian transformations recorded at the county level (the Catalan comarca of Baix Empordà) between the mid-nineteenth and the mid-twentieth century. The guiding thread is the distribution of agricultural land ownership, but understanding it requires integrating many other variables. The work also sets out to test some methodological procedures that are unusual in the analysis of the distribution of agricultural land ownership and its evolution in the contemporary period. The central hypothesis is that, in the Baix Empordà and throughout the period 1850-1940, the changes in the structure of ownership, and also in the rural social structure, generally worked in favour of the peasant groups. In particular, it is argued: (1) that the starting situation (mid-nineteenth century) was already characterized by the notable weight of small peasant property within the structure of agricultural ownership and within the agrarian system as a whole; (2) that, after the late-century agrarian crisis, the profitability problems of agrarian production and the erosion of some rent-extraction mechanisms tended to drive away the rentier sectors that had traditionally exercised economic and social hegemony in rural society; and (3), finally, that over the period peasant property advanced, as a significant share of peasant families managed to enlarge their landholdings through purchases in the land market, while a significant number of old large estates were fragmented and dissolved. The magnitude of these changes was moderate and not free of ambiguities, but it highlights the capacity of the peasant holding to resist and adapt to the conditions of an evolving capitalism, despite the contrary predictions of many theorists.
The thesis is organized in two parts. The first provides a detailed description of the characteristics of the Baix Empordà agrarian system in the mid-nineteenth century, with the ultimate aim of determining the economic significance of the land held by each family estate (beyond the simple consideration of surface area). The first step is the analysis of land use, of the main crops and their arrangement in rotations, of physical yields, of fertility-restoration practices and of livestock endowment. The techniques and the agrarian labour process are then described, with the aim of formulating a model of agricultural labour organization that makes it possible to measure the labour requirements of this activity. It is concluded that, from the perspective of the employment and labour demand generated by the agrarian system, rural localities were characterized by a strong labour surplus relative to the labour demands of the crops, from both a macroeconomic and a microeconomic perspective. The third chapter focuses on assessing the consumption and reproduction needs of the peasant family units. The estimates allow a flexible model to be proposed, which is contrasted with the income potentially obtainable from each estate. The conclusion is that only a tiny part of the population managed to obtain, through direct exploitation of its own estate, the income necessary for its simple economic reproduction. At the same time, however, the economic and social importance of small peasant estates is highlighted. It is estimated that on average around 45% of agricultural land was held by this segment of owners, and the fourth chapter studies the implications of this fact. The portrait of the starting situation ends with a study of the non-ownership regimes predominant in the comarca. In the second part, this static view gives way to a dynamic analysis.
In the mid-nineteenth century the Baix Empordà was reaching the end of a long expansive phase begun a century earlier. The first signs of exhaustion were the intense loss of rural population between 1860 and 1880, the halt in the expansion of cultivation and the strong development of the cork industry, the axis of the comarca's new economic engine. After 1860, changes in the distributive structure of ownership tended to point towards the consolidation of peasant property. A process of land transfer took place from the rentier sectors to peasant sectors, carried out through sales in the land market rather than through emphyteutic grants and sub-grants. Its ultimate consequence was the retreat of the old rentier estates, which, in general, were not replaced by the appearance of new large estates, as had happened before. In parallel, a good number of rural family units also abandoned the countryside and their properties, producing another line of land transfer between peasant sectors. The sustained depreciation of agricultural prices, the fall in agrarian rent, the higher profitability of investment in securities and the incidence of growing agrarian conflict are the factors highlighted to explain the decline of the large landed estates. From the peasant perspective, three explanatory elements are proposed to interpret the process of patrimonial accumulation observed in a certain segment of the population: (1) the maintenance of production strategies for self-consumption (an always controversial aspect that is difficult to demonstrate); (2) the existence of a significant flow of wage and extra-agricultural income in the composition of peasant family income; and (3) the change in the technical and productive orientation of peasant holdings.
The combination of the three, while limiting the direct effects of agrarian price movements, would have made the observed accumulative strategy possible.
Abstract:
The article attempts to answer the question of how the organizational model of Investigative Reporters and Editors has influenced the associations of investigative journalists that have been emerging around the world for several decades. The author discusses the development of structures bringing together investigative reporters in various countries, in particular the forms of activity and the goals they have adopted. The growing number and global character of this phenomenon is leading to the institutionalization of investigative journalism worldwide. The article characterizes selected organizations, in particular those that have adapted the model developed by IRE. With regard to the remaining journalists' associations, the author points to the reasons for rejecting the formula developed by the American muckrakers.
Abstract:
Europe has responded to the crisis with strengthened budgetary and macroeconomic surveillance, the creation of the European Stability Mechanism, liquidity provisioning by resilient economies and the European Central Bank, and a process towards a banking union. However, a monetary union requires some form of budget for fiscal stabilisation in case of shocks, and as a backstop to the banking union. This paper compares four quantitatively different schemes of fiscal stabilisation and proposes a new scheme based on GDP-indexed bonds. The options considered are: (i) a federal budget with unemployment and corporate taxes shifted to euro-area level; (ii) a support scheme based on deviations from potential output; (iii) an insurance scheme via which governments would issue bonds indexed to GDP; and (iv) a scheme in which access to jointly guaranteed borrowing is combined with gradual withdrawal of fiscal sovereignty. Our comparison is based on strong assumptions. We carry out a preliminary, limited simulation of how the debt-to-GDP ratio would have developed between 2008 and 2014 under the four schemes for Greece, Ireland, Portugal, Spain and an ‘average’ country. The schemes have varying implications in each case for debt sustainability.
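The kind of debt-to-GDP simulation the paper describes rests on the standard debt-dynamics recursion b' = b(1+r)/(1+g) - pb. The toy sketch below shows how a GDP-indexed coupon (an interest rate that moves partly with nominal growth) dampens debt accumulation through a recession; the growth path, coupon and indexation rule are invented numbers, not the paper's calibration.

```python
def debt_path(b0, growth, primary_balance, coupon, indexed=False, alpha=0.5):
    """Debt-to-GDP recursion b' = b*(1+r)/(1+g) - pb.
    If indexed, the coupon moves partially with nominal growth,
    r = coupon + alpha*(g - coupon), as a GDP-indexed bond would."""
    b = b0
    path = [b]
    for g, pb in zip(growth, primary_balance):
        r = coupon + alpha * (g - coupon) if indexed else coupon
        b = b * (1 + r) / (1 + g) - pb
        path.append(b)
    return path

growth = [-0.04, -0.02, 0.00, 0.01, 0.02]   # recession, then recovery
pb = [0.0] * 5                              # zero primary balance
plain = debt_path(1.0, growth, pb, coupon=0.04)
linked = debt_path(1.0, growth, pb, coupon=0.04, indexed=True)
print(round(plain[-1], 4), round(linked[-1], 4))
# the indexed path ends lower: the coupon falls in the recession years
```

The same recursion, run country by country with historical growth and deficit data, is what underlies a comparison of the four stabilisation schemes.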
Abstract:
This paper describes laboratory observations of inertia–gravity waves emitted from balanced fluid flow. In a rotating two-layer annulus experiment, the wavelength of the inertia–gravity waves is very close to the deformation radius. Their amplitude varies linearly with Rossby number in the range 0.05–0.14, at constant Burger number (or rotational Froude number). This linear scaling challenges the notion, suggested by several dynamical theories, that inertia–gravity waves generated by balanced motion will be exponentially small. It is estimated that the balanced flow leaks roughly 1% of its energy each rotation period into the inertia–gravity waves at the peak of their generation. The findings of this study imply an inevitable emission of inertia–gravity waves at Rossby numbers similar to those of the large-scale atmospheric and oceanic flow. Extrapolation of the results suggests that inertia–gravity waves might make a significant contribution to the energy budgets of the atmosphere and ocean. In particular, emission of inertia–gravity waves from mesoscale eddies may be an important source of energy for deep interior mixing in the ocean.
Abstract:
An algorithm is presented for the generation of molecular models of defective graphene fragments, containing a majority of 6-membered rings with a small number of 5- and 7-membered rings as defects. The structures are generated from an initial random array of points in 2D space, which are then subject to Delaunay triangulation. The dual of the triangulation forms a Voronoi tessellation of polygons with a range of ring sizes. An iterative cycle of refinement, involving deletion and addition of points followed by further triangulation, is performed until the user-defined criteria for the number of defects are met. The array of points and connectivities is then converted to a molecular structure and subject to geometry optimization using a standard molecular modeling package to generate final atomic coordinates. On the basis of molecular mechanics with minimization, this automated method can generate structures that conform to user-supplied criteria and avoid the potential bias associated with the manual building of structures. One application of the algorithm is the generation of structures for the evaluation of the reactivity of different defect sites. Ab initio electronic structure calculations on a representative structure indicate preferential fluorination close to 5-ring defects.
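The geometric core of the algorithm above (random 2D points, Delaunay triangulation, then the Voronoi dual whose cells play the role of rings) can be sketched with scipy; the refinement loop and the conversion to atomic coordinates are omitted, and the point count and seed are arbitrary choices.

```python
from collections import Counter

import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(42)
pts = rng.uniform(0, 10, (60, 2))   # initial random point array

tri = Delaunay(pts)                 # triangulation of the points
vor = Voronoi(pts)                  # its dual tessellation

# Ring sizes = number of edges of each bounded Voronoi cell; in the
# graphene analogy a 6-sided cell is a perfect ring, 5/7 are defects.
sizes = [len(vor.regions[r]) for r in vor.point_region
         if vor.regions[r] and -1 not in vor.regions[r]]
print(Counter(sizes).most_common())
```

A refinement step would then add or delete points and re-triangulate until the counts of 5- and 7-sided cells match the user-supplied defect criteria.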
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, and perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media, 8, 315-414, 1998. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning.
Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
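The conditioning behaviour described above, a condition number that grows without bound as the approximation space enlarges, can be illustrated generically. The sketch below uses a monomial (Vandermonde) basis as a stand-in for the Rayleigh-expansion system of the SS* method; it is not that scheme itself, only an analogous ill-conditioned least-squares family.

```python
import numpy as np

# Condition number growth as the approximation space enlarges.
x = np.linspace(0, 1, 200)
conds = []
for n in (4, 8, 16, 24):
    A = np.vander(x, n, increasing=True)   # n-dimensional approximation space
    conds.append(np.linalg.cond(A))
    print(n, f"{conds[-1]:.2e}")
# The condition number blows up rapidly with n, so the computed
# least-squares coefficients lose accuracy even as the best
# approximation error decreases.
```

This is the practical trade-off the paper's error bounds quantify: a larger space improves approximation power while the linear algebra becomes progressively less trustworthy.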
Abstract:
Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract), and to the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even or multiple increases. At least 60% must be below the KM (or KMs, if there is more than one dissociation constant) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
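The effect of the 60/40 design rule can be checked with the asymptotic (Fisher-information) variance of the KM estimate for the simple Michaelis-Menten model v = Vmax·s/(KM + s). The parameter values, noise level and concrete design points below are invented for illustration and are not the paper's Bayesian utility functions, only a simpler frequentist proxy for the same comparison.

```python
import numpy as np

def km_variance(s, Vmax=1.0, Km=2.0, sigma=1.0):
    """Asymptotic variance of the Km estimate for the Michaelis-Menten
    model v = Vmax*s/(Km+s), from the inverse Fisher information."""
    g1 = s / (Km + s)                  # dv/dVmax at each design point
    g2 = -Vmax * s / (Km + s) ** 2     # dv/dKm at each design point
    J = np.array([[g1 @ g1, g1 @ g2],
                  [g1 @ g2, g2 @ g2]]) / sigma**2
    return np.linalg.inv(J)[1, 1]

even = np.linspace(0.1, 10, 10)                    # even spread of 10 points
rule = np.concatenate([np.linspace(0.2, 1.8, 6),   # 60% below Km = 2
                       np.linspace(3, 10, 4)])     # 40% above
print(km_variance(even), km_variance(rule))
# the 60/40 design gives a markedly lower Km variance than the even spread
```

This reproduces the qualitative claim: concentrating most design points below KM substantially reduces the variance of the KM estimate for the same number of measurements.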
Abstract:
The Bahrain International Circuit (BIC) is considered one of the best international racing car tracks in terms of technical aspects and architectural quality. Two Formula 1 races have been hosted in the Kingdom of Bahrain, in 2004 and 2005, at the BIC, and the circuit recently won the award for the best international racing car circuit. This paper highlights the elements that contributed to the success of the project, from the architectural aspects, construction, challenges, tendering process, risk management, workforce and speed of the construction method, to future prospects for harnessing solar and wind energy for sustainable electrification and water production for the circuit, i.e. making the BIC a green and environment-friendly international circuit.
Abstract:
This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented by a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented more densely where their curvatures are high. The more complex the contour's shape, the greater the number of points required, but the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface, regardless of the size of the surface or the size of the object.
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
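Transreal arithmetic, as invoked above, extends the reals with +∞, -∞ and the nullity element Φ so that division (and hence every arithmetic operation) is total. The sketch below illustrates the standard transreal division rules (1/0 = +∞, -1/0 = -∞, 0/0 = Φ); modelling Φ with an IEEE NaN is a simplifying assumption of this illustration, not a faithful encoding, since NaN ≠ NaN whereas Φ = Φ in transreal arithmetic.

```python
import math

INF, NINF = math.inf, -math.inf
PHI = float('nan')   # nullity, modelled here (imperfectly) with IEEE NaN

def transreal_div(a, b):
    """Transreal division: total on all inputs.
    a/0 = +inf for a > 0, -inf for a < 0, and nullity for a = 0."""
    if b == 0:
        if a > 0:
            return INF
        if a < 0:
            return NINF
        return PHI   # 0/0 = nullity (also PHI/0 = PHI, since PHI > 0 is false)
    return a / b

print(transreal_div(1, 0), transreal_div(-1, 0), transreal_div(0, 0))
# -> inf -inf nan
```

Because every input produces a defined output, the "division by zero is impossible" counter-arguments discussed in the abstract simply do not arise inside this extended system.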
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points, and the wireframe is a network of contour generators. Both are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvatures are high. The more complex the contour's shape, the greater the number of points required, but the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface, regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects.
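The sampling property claimed above (points cluster where curvature is high, spread out where the surface is flat) can be illustrated on a planar contour by resampling with density proportional to local curvature. An ellipse is used here as a stand-in contour, and the proportional-density rule is an illustrative assumption, not the paper's reconstruction procedure.

```python
import numpy as np

# Sample an ellipse with point density proportional to local curvature,
# mimicking the clustering of reconstructed points on highly curved parts.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
a, b = 4.0, 1.0
# Curvature of the ellipse (a cos t, b sin t):
kappa = a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

# Inverse-transform sampling of 50 points from the curvature density.
cdf = np.cumsum(kappa)
cdf /= cdf[-1]
u = (np.arange(50) + 0.5) / 50
samples = t[np.searchsorted(cdf, u)]

# Distance of each sample to the nearest high-curvature end (t = 0 or pi);
# most samples land near the sharply curved ends of the major axis.
d = np.abs((samples + np.pi / 2) % np.pi - np.pi / 2)
print(np.sum(d < np.pi / 4), "of 50 samples lie near the high-curvature ends")
```

The same intuition explains why uniformly distributed viewing directions yield denser frontier points exactly where a contour generator bends most sharply.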