942 results for Hilbert schemes of points Poincaré polynomial Betti numbers Goettsche formula
Abstract:
The prominent position given to academic writing across contemporary academia is reflected in the substantial literature and debate devoted to the subject over the past 30 years. However, the massification of higher education, manifested by a shift from elite to mass education, has brought the issue into the public arena, with much debate focusing on the need for ‘modern-day' students to be taught how to write academically (Bjork et al., 2003; Ganobcsik-Williams, 2006). Indeed, Russell (2003) argued that academic writing has become a global ‘problem' in higher education because it sits between two contradictory pressures (p.V). At one end of the university ‘experience', increasing numbers of students, many from non-traditional backgrounds, enter higher education bringing with them a range of communication abilities. At the other end, many graduates leave university to work in specialised industries where employers expect them to have high-level writing skills (Ashton, 2007; Russell, 2003; Torrence et al., 1999). By drawing attention to the issues around peer mentoring within an academic writing setting in three different higher education institutions, this paper makes an important contribution to current debates. Based upon a critical analysis of the emergent findings of an empirical study into the role of peer writing mentors in promoting student transition to higher education, the paper adopts an academic literacies approach to discuss the role of writing mentoring in promoting transition and retention by developing students' academic writing. Attention is drawn to the manner in which student expectations of writing mentoring actually align with mentoring practices, particularly in terms of the writing process and critical thinking. Other issues, such as the approachability of writing mentors, the practicalities of accessing writing mentoring, and the wider learning environment, are also discussed.
Abstract:
In this paper, we report on the strain and pressure testing of highly flexible skins embedded with Bragg grating sensors recorded in either silica or polymer optical fibre. The photonic skins, with a size of 10 cm × 10 cm and a thickness of 1 mm, were fabricated by embedding the polymer fibre or silica fibre containing Bragg gratings in Sylgard 184 from Dow Corning. Pressure sensing was studied using a cylindrical metal post placed on an array of points across the skin. The polymer fibre grating exhibits approximately 10 times the pressure sensitivity of the silica fibre and responds to the post even when it is placed a few centimetres away from the sensing fibre. Although the intrinsic strain sensitivities of gratings in the two fibre types are very similar, when embedded in the skin the polymer grating displayed a strain sensitivity approximately 45 times greater than the silica device, which also suffered from considerable hysteresis. The polymer grating displayed a near-linear response, with wavelength shifts of 9 nm for 1% strain. We attribute the difference in behaviour to the much greater Young's modulus of the silica fibre (70 GPa) compared to the polymer fibre (3 GPa).
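The strain response described above follows the standard Bragg-grating relation Δλ = λ·(1 − p_e)·ε. A minimal sketch of that relation, assuming a typical literature value of the photo-elastic coefficient p_e for silica fibre (the abstract itself reports embedded-skin sensitivities, not this bare-fibre figure):

```python
# Illustrative Bragg-wavelength shift under axial strain.
# p_e = 0.22 is a typical literature value for silica fibre,
# an assumption, not a figure taken from the abstract.
def bragg_shift_nm(wavelength_nm: float, strain: float, p_e: float = 0.22) -> float:
    """Shift dL = L * (1 - p_e) * strain, in the same units as wavelength_nm."""
    return wavelength_nm * (1.0 - p_e) * strain

# A 1550 nm grating at 1% strain:
shift = bragg_shift_nm(1550.0, 0.01)
print(f"{shift:.2f} nm")  # ~12.09 nm for a bare silica fibre
```

Embedding in a compliant skin changes the effective strain transfer, which is why the polymer and silica devices in the paper differ so strongly despite similar intrinsic sensitivities.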
Abstract:
A novel distributed amplification scheme for quasi-lossless transmission is presented. The system is studied numerically and shown to be able to strongly reduce signal power variations in comparison with currently employed schemes of similar complexity. As an example, variations of less than 3.1 dB for 100 km distance between pumps and below 0.42 dB for 60 km are obtained when using standard single-mode fibre as the transmission medium with an input signal average power of 0 dBm, and a total pump power of about 1.7 W. © 2004 Optical Society of America.
Abstract:
Conventional DEA models assume deterministic, precise and non-negative data for input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, e.g., external temperature and real-time outtake. Complementing earlier work that addressed the two problems (interval data and negative data) separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may contain upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes, namely the strictly efficient, the weakly efficient and the inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
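The three-way classification the abstract describes can be sketched once interval efficiency scores are in hand. A minimal illustration, assuming precomputed [lower, upper] scores per DMU (the branch names and numbers are invented; the paper's actual bounds come from solving DEA models at the interval limits):

```python
# Classify DMUs from interval efficiency scores [e_lo, e_hi].
# A score of 1.0 marks full efficiency; the example data are made up.
def classify(e_lo: float, e_hi: float) -> str:
    if e_lo >= 1.0:
        return "strictly efficient"   # efficient even in the worst case
    if e_hi >= 1.0:
        return "weakly efficient"     # efficient only in the best case
    return "inefficient"              # below the frontier in all cases

scores = {"branch A": (1.0, 1.0), "branch B": (0.83, 1.0), "branch C": (0.61, 0.78)}
for dmu, (lo, hi) in scores.items():
    print(dmu, "->", classify(lo, hi))
```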
Abstract:
The paper presents an approach to the extraction of facts from the texts of documents. The approach is based on knowledge of the subject domain, a specialized dictionary, and fact schemes that describe fact structures while taking into account both the semantic and the syntactic compatibility of fact elements. Each extracted fact combines into a single structure the dictionary lexical objects found in the text and matches them against the concepts of the subject-domain ontology.
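The dictionary-plus-scheme idea can be illustrated with a toy matcher. This is a hypothetical sketch, not the paper's system: the scheme, the dictionary entries, and the role names are all invented, and real fact extraction would also check the syntactic compatibility the abstract mentions:

```python
# Toy fact extraction: find dictionary lexical objects in the text,
# then instantiate any fact scheme whose roles are all covered.
# Scheme, dictionary and example text are invented for illustration.
FACT_SCHEME = {"appointment": ("PERSON", "POSITION", "ORGANIZATION")}
DICTIONARY = {
    "J. Smith": "PERSON",
    "director": "POSITION",
    "Acme Corp": "ORGANIZATION",
}

def extract_facts(text):
    # dictionary lexical objects present in the text, keyed by semantic class
    hits = {cls: tok for tok, cls in DICTIONARY.items() if tok in text}
    facts = []
    for name, roles in FACT_SCHEME.items():
        if all(r in hits for r in roles):      # scheme fully instantiated
            facts.append((name, tuple(hits[r] for r in roles)))
    return facts

print(extract_facts("J. Smith was appointed director of Acme Corp."))
# [('appointment', ('J. Smith', 'director', 'Acme Corp'))]
```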
Abstract:
The paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
1 Supported in part by the Norwegian Research Council for Science and the Humanities. It is a pleasure for this author to thank the Department of Mathematics of the University of Sofia for organizing the remarkable conference in Zlatograd during the period August 28-September 2, 1995. It is also a pleasure to thank the M.I.T. Department of Mathematics for its hospitality from January 1 to July 31, 1993, when this work was started. 2 Supported in part by NSF grant 9400918-DMS.
Abstract:
2000 Mathematics Subject Classification: 11T06, 13P10.
Abstract:
Agents inhabiting large-scale environments are faced with the problem of generating maps by which they can navigate. One solution to this problem is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of the different point selection methods used to create the map depends on the environment; no single point selection method dominates. This motivates a novel self-adaptive approach in which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art, while the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large-scale virtual environments based in Second Life, representing real locations in London.
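The self-adaptive combination of samplers can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the two samplers, the reward rule, and the disc obstacle are all invented stand-ins for the point selection methods and environments the paper evaluates:

```python
import random

# Self-adaptively mix point-selection methods: each sampler's weight
# grows with its successes (here, landing in free space), so the mix
# adapts to the environment instead of being fixed in advance.
def uniform(rng):
    return (rng.random(), rng.random())

def perturbed(rng):  # a uniform point nudged by small noise (stand-in sampler)
    x, y = rng.random(), rng.random()
    return (min(max(x + rng.gauss(0, 0.05), 0), 1),
            min(max(y + rng.gauss(0, 0.05), 0), 1))

def in_free_space(p):  # placeholder obstacle test: one disc at the centre
    return (p[0] - 0.5) ** 2 + (p[1] - 0.5) ** 2 > 0.04

def sample_roadmap(n, rng):
    samplers = [uniform, perturbed]
    weights = [1.0] * len(samplers)
    points = []
    for _ in range(n):
        i = rng.choices(range(len(samplers)), weights=weights)[0]
        p = samplers[i](rng)
        if in_free_space(p):
            points.append(p)
            weights[i] += 1.0   # reward the sampler that succeeded
    return points

pts = sample_roadmap(200, random.Random(0))
```

A full probabilistic roadmap would then connect these points with collision-free edges; only the adaptive sampling step is shown here.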
Abstract:
This paper presents a novel approach to the computation of primitive geometrical structures, where no prior knowledge about the visual scene is available and a high level of noise is expected. We base our work on the grouping principles of proximity and similarity, applied to points and to preliminary models. The former is realized using Minimum Spanning Trees (MST), to which we apply stable alignment and goodness-of-fit criteria. For the latter, we use spectral clustering of preliminary models. The algorithm can be generalized to various model-fitting settings without tuning of run parameters. Experiments demonstrate a significant improvement in the localization accuracy of models in plane, homography and motion segmentation examples. Unlike most algorithms in the field, its efficiency does not depend on fine tuning of run parameters.
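The MST-based proximity grouping can be illustrated with a toy example. A minimal sketch, assuming a simple cut threshold on edge length; the paper's stable alignment and goodness-of-fit criteria are not reproduced here, and the data are invented:

```python
import math
from itertools import combinations

# Proximity grouping via a minimum spanning tree: build the MST over
# the points (Kruskal), then cut edges longer than a threshold so each
# remaining component is one proximity group.
def mst_clusters(points, cut):
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(points[a], points[b]), a, b)
        for a, b in combinations(range(len(points)), 2)
    )
    for d, a, b in edges:       # shortest edges first
        if d > cut:
            break               # cutting = simply never joining
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = [(0, 0), (0.1, 0), (0.2, 0.1), (5, 5), (5.1, 5)]
print(mst_clusters(pts, cut=1.0))   # two groups: {0, 1, 2} and {3, 4}
```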
Abstract:
We present a test for identifying clusters in high dimensional data based on the k-means algorithm when the null hypothesis is spherical normal. We show that projection techniques used for evaluating validity of clusters may be misleading for such data. In particular, we demonstrate that increasingly well-separated clusters are identified as the dimensionality increases, when no such clusters exist. Furthermore, in a case of true bimodality, increasing the dimensionality makes identifying the correct clusters more difficult. In addition to the original conservative test, we propose a practical test with the same asymptotic behavior that performs well for a moderate number of points and moderate dimensionality. ACM Computing Classification System (1998): I.5.3.
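The pitfall motivating the test can be demonstrated with a toy run. This is an illustration only, not the paper's test: running 2-means on a single spherical Gaussian still yields two separated centroids, because k-means always partitions the data whether or not clusters exist:

```python
import random

# Plain 2-means (Lloyd's algorithm) on unimodal spherical-normal data.
def two_means(points, iters=20):
    c0, c1 = points[0], points[1]               # naive initialization
    for _ in range(iters):
        g0, g1 = [], []
        for p in points:
            d0 = sum((a - b) ** 2 for a, b in zip(p, c0))
            d1 = sum((a - b) ** 2 for a, b in zip(p, c1))
            (g0 if d0 <= d1 else g1).append(p)
        if not g0 or not g1:
            break
        c0 = tuple(sum(xs) / len(g0) for xs in zip(*g0))
        c1 = tuple(sum(xs) / len(g1) for xs in zip(*g1))
    return c0, c1

rng = random.Random(1)
d = 10
data = [tuple(rng.gauss(0, 1) for _ in range(d)) for _ in range(300)]
c0, c1 = two_means(data)
sep = sum((a - b) ** 2 for a, b in zip(c0, c1)) ** 0.5
print(f"centroid separation: {sep:.2f}")  # clearly nonzero despite unimodal data
```

The paper's contribution is a principled way to decide whether such a separation is evidence of real structure or merely an artifact of the partitioning, which matters increasingly as the dimensionality grows.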
Abstract:
2000 Mathematics Subject Classification: 30C40, 30D50, 30E10, 30E15, 42C05.
Abstract:
2000 Mathematics Subject Classification: 14Q05, 14Q15, 14R20, 14D22.
Abstract:
2000 Mathematics Subject Classification: Primary 17A50, Secondary 16R10, 17A30, 17D25, 17C50.
Abstract:
2010 Mathematics Subject Classification: 33C45, 40G05.