992 results for Geometric framework
Abstract:
This paper proposes an alternative geometric framework for analysing the inter-relationship between domestic saving, productivity and income determination in discrete time. The framework provides a means of understanding how low saving economies like the United States sustained high growth rates in the 1990s whereas high saving Japan did not. It also illustrates how the causality between saving and economic activity runs both ways and that discrete changes in national output and income depend on both current and previous accumulation behaviour. The open economy analogue reveals how international capital movements can create external account imbalances that enhance income growth for both borrower and lender economies.
Abstract:
We propose a segmentation method based on the geometric representation of images as 2-D manifolds embedded in a higher dimensional space. The segmentation is formulated as a minimization problem, where the contours are described by a level set function and the objective functional corresponds to the surface of the image manifold. In this geometric framework, both data-fidelity and regularity terms of the segmentation are represented by a single functional that intrinsically aligns the gradients of the level set function with the gradients of the image and results in a segmentation criterion that exploits the directional information of image gradients to overcome image inhomogeneities and fragmented contours. The proposed formulation combines this robust alignment of gradients with attractive properties of previous methods developed in the same geometric framework: 1) the natural coupling of image channels proposed for anisotropic diffusion and 2) the ability of subjective surfaces to detect weak edges and close fragmented boundaries. The potential of such a geometric approach lies in the general definition of Riemannian manifolds, which naturally generalizes existing segmentation methods (the geodesic active contours, the active contours without edges, and the robust edge integrator) to higher dimensional spaces, non-flat images, and feature spaces. Our experiments show that the proposed technique improves the segmentation of multi-channel images, images subject to inhomogeneities, and images characterized by geometric structures like ridges or valleys.
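As a rough illustration of the kind of level-set evolution such frameworks build on, the sketch below implements one explicit step of a plain geodesic active contour, one of the methods the abstract says the framework generalizes; the edge-stopping function g, the time step, and the initialization are illustrative assumptions, not the authors' gradient-alignment functional.

```python
import numpy as np

def edge_stopping(image):
    """g = 1 / (1 + |grad I|^2): small on strong edges, ~1 in flat regions."""
    iy, ix = np.gradient(image.astype(float))
    g = 1.0 / (1.0 + ix**2 + iy**2)
    gy, gx = np.gradient(g)
    return g, gx, gy

def gac_step(phi, g, gx, gy, dt=0.1):
    """One explicit geodesic-active-contour update:
    phi_t = g * |grad phi| * curvature + grad g . grad phi."""
    py, px = np.gradient(phi)
    norm = np.sqrt(px**2 + py**2) + 1e-8
    ny, nx = py / norm, px / norm
    # curvature = div(grad phi / |grad phi|)
    curv = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return phi + dt * (g * norm * curv + gx * px + gy * py)
```

Iterating gac_step on a signed-distance initialization drives the zero level set onto strong edges; reinitialization and the paper's alignment of level-set gradients with image gradients are omitted for brevity.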
Abstract:
We present a unified geometric framework for describing both the Lagrangian and Hamiltonian formalisms of regular and non-regular time-dependent mechanical systems, based on the approach of Skinner and Rusk (1983). The dynamical equations of motion and their compatibility and consistency are carefully studied, making clear that all the characteristics of the Lagrangian and the Hamiltonian formalisms are recovered in this formulation. As an example, a semidiscretization of the nonlinear wave equation is studied, demonstrating the applicability of the proposed formalism.
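For orientation, here is a minimal sketch of the autonomous Skinner-Rusk construction that such unified formalisms extend; the time-dependent version in the paper adds the time factor and the corresponding presymplectic structure, and the notation below is an assumption, not the paper's.

```latex
% Unified Skinner--Rusk space: the Whitney sum W = TQ \oplus_Q T^*Q,
% with projections pr_1 : W \to TQ and pr_2 : W \to T^*Q.
% Presymplectic form pulled back from the canonical form on T^*Q:
\Omega = \mathrm{pr}_2^{\,*}\,\omega_Q
% Hamiltonian coupling the two factors (L is the Lagrangian):
H(v_q, \alpha_q) = \langle \alpha_q, v_q \rangle - L(v_q)
% Dynamics: integral curves of a vector field X satisfying
i_X \Omega = \mathrm{d}H ,
% whose compatibility constraints recover the Legendre map
% p = \partial L / \partial v and the Euler--Lagrange / Hamilton equations.
```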
Abstract:
This research investigates the relationship between the geometric method employed by Pablo Palazuelo and the architectural design process. The choice of this Madrid-born painter and sculptor as the guiding thread of this thesis is not fortuitous, since architecture exerted an essential influence on his work. This influence reached him in part through his academic training, as he studied architecture at the School of Arts and Crafts in Oxford (1933-1936). He also designed proposals closely tied to a built place, and therefore conditioned by its traces. The working hypothesis, formulated from texts by authors such as Víctor Nieto Alcaide and Juan Daniel Fullaondo, suggested an interconnection with organic architecture. To gauge the depth of other published analyses, the texts selected were those that examine the process the artist followed while producing his work and that address structural questions transcending the purely formal domain. Following this criterion, and within a temporal delimitation, the studies chosen were those by Santiago Amón, Carme Bonell, Valerio Bozal, Manuel J. Borja-Villel, Francisco Calvo Serraller, Claude Esteban, Julián Gállego, Teresa Grandas, Max Hölzer, George Limbour, Kevin Power and Carlos Rodríguez-Spiteri. To these authors are added the oral sources consulted within an environment intrinsically close to his work, namely Pere Casanovas and Soledad Lorenzo, as well as people who, at different stages of his life and for different reasons, crossed paths with his work: Ramón Ayerza, Mariano Bayón, José Antonio Corrales, Luis Gordillo, Rafael Moneo, José Rodríguez-Spiteri and Antonio Tornero. Thanks to the access obtained to the writings, books, sketches and abundant graphic and sculptural work held by the painter's Foundation, it was possible to build a scaffolding, both theoretical and geometric, that served as the basis for testing these premises. The work is structured as a narrative that, in addition to the aforementioned studies, begins with the foundations of Palazuelo's thought, drawn from his writings, in which he defended a syncretism reconciling the visions of Western and Eastern cultures. The following sections analyse the author's main graphic and sculptural works, with particular emphasis on his productive method. This gestation resists mere chronological enumeration, so the classification proposed in this work tries to remain as faithful as possible to the spirit expressed by Palazuelo, based on lineages and coherences, in order to reveal the tools he employed and compare them with the design process. The account closes with a final section that brings together for the first time the sixteen works and sixteen projects that best illustrate the most direct rapprochement Palazuelo achieved between his geometric investigations and a specific locus. Over almost four years, a detailed inventory and cataloguing was carried out of the documents and the pieces on paper, canvas and metal produced by Palazuelo. This survey brings to light a corpus of nearly four thousand works, mostly unpublished, which constitute the archive of that institution.
Ultimately, this research builds a graphic and geometric fabric around one of the most important Spanish artists of the twentieth century, interwoven with his theoretical thinking and his output in drawings, models, sculptures and architectural proposals, which make it possible to establish the agreements and disagreements with the architectural process and to propose a new interdisciplinary geometric approach.
Abstract:
Organisms from slime moulds to humans carefully regulate their macronutrient intake to optimize a wide range of life history characters including survival, stress resistance, and reproductive success. However, life history characters often differ in their response to nutrition, forcing organisms to make foraging decisions while balancing the trade-offs between these effects. To date, we have a limited understanding of how the nutritional environment shapes the relationship between life history characters and foraging decisions. To gain insight into the problem, we used a geometric framework for nutrition to assess how the protein and carbohydrate content of the larval diet affected key life history traits in the fruit fly, Drosophila melanogaster. In no-choice assays, survival from egg to pupa, female and male body size, and ovariole number - a proxy for female fecundity - were maximized at the highest protein-to-carbohydrate (P:C) ratio (1.5:1). In contrast, development time was minimized at intermediate P:C ratios, around 1:2. Next, we subjected larvae to two-choice tests to determine how they regulated their protein and carbohydrate intake in relation to these life history traits. Our results show that larvae targeted their consumption to P:C ratios that minimized development time. Finally, we examined whether adult females also chose to lay their eggs in the P:C ratios that minimized development time. Using a three-choice assay, we found that adult females preferentially laid their eggs in food P:C ratios that were suboptimal for all larval life history traits. Our results demonstrate that D. melanogaster larvae make foraging decisions that trade off development time against body size, ovariole number, and survival. In addition, adult females make oviposition decisions that do not appear to benefit the larvae. We propose that these decisions may reflect the living nature of the larval nutritional environment in rotting fruit. These studies illustrate the interaction between the nutritional environment, life history traits, and foraging choices in D. melanogaster, and lend insight into the ecology of their foraging decisions.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Acknowledgements JRS was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant XDB13030000), a ‘1000 talents’ professorship from the Ministry of Science and Technology (MOST) of the Chinese government, and a Wolfson award from the Royal Society. SEM was supported by the US National Institutes of Health grant R01AG043972, and MM was supported by a TWAS studentship of the Chinese Academy of Sciences, during the preparation of this manuscript. We are grateful to three anonymous referees for their constructive and helpful comments.
Abstract:
The Montreal Process indicators are intended to provide a common framework for assessing and reviewing progress toward sustainable forest management. The potential of a combined geometrical-optical/spectral mixture analysis model was assessed for mapping the Montreal Process age class and successional stage indicators at a regional scale using Landsat Thematic Mapper data. The project location is an area of eucalyptus forest in Emu Creek State Forest, Southeast Queensland, Australia. A quantitative model was developed relating the spectral reflectance of the forest to the illumination geometry, the slope and aspect of the terrain surface, and the size, shape, and density of the tree canopies. Inversion of this model necessitated the use of spectral mixture analysis to recover subpixel information on the fractional extent of ground scene elements (such as sunlit canopy, shaded canopy, sunlit background, and shaded background). Results obtained from a sensitivity analysis allowed improved allocation of resources to maximize the predictive accuracy of the model. It was found that modeled estimates of crown cover projection, canopy size, and tree densities had significant agreement with field and air photo-interpreted estimates. However, the accuracy of the successional stage classification was limited. The results obtained highlight the potential for future integration of high and moderate spatial resolution imaging sensors for monitoring forest structure and condition.
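As a hedged sketch of the spectral-mixture-analysis step described above, the snippet below recovers sub-pixel fractions of scene components by sum-to-one-constrained nonnegative least squares; the four-component setup and the weighting factor delta are illustrative assumptions, not the study's calibration.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, delta=100.0):
    """Estimate abundance fractions a >= 0 with sum(a) ~= 1 by augmenting
    the endmember matrix with a heavily weighted sum-to-one row.
    endmembers: (bands, m) matrix of component spectra; pixel: (bands,)."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    x = np.append(pixel, delta)
    fractions, _ = nnls(E, x)
    return fractions

# e.g. columns of `endmembers` could stand for sunlit canopy, shaded canopy,
# sunlit background, and shaded background.
```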
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
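A minimal sketch of the projection loop described above, assuming pure pixels and dimensionality-reduced data; the initialization, noise handling, and subspace-identification steps of the actual VCA algorithm are omitted.

```python
import numpy as np

def extract_endmembers(Y, p, rng=np.random.default_rng(0)):
    """Y: (d, n) spectral vectors (d = reduced dimension); p: number of
    endmembers. Iteratively project onto a direction orthogonal to the
    subspace spanned by the endmembers found so far and keep the extreme."""
    d, n = Y.shape
    E = np.zeros((d, p))
    for i in range(p):
        # direction orthogonal to span(E[:, :i]): strip the component of a
        # random vector that lies in the already-found subspace
        w = rng.standard_normal(d)
        if i > 0:
            A = E[:, :i]
            w -= A @ np.linalg.lstsq(A, w, rcond=None)[0]
        idx = np.argmax(np.abs(w @ Y))   # extreme of the projection
        E[:, i] = Y[:, idx]              # new endmember signature
    return E
```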
Abstract:
The future of high technology welded constructions will be characterised by higher strength materials and improved weld quality with respect to fatigue resistance. The expected implementation of high quality high strength steel welds will require that more attention be given to the issues of crack initiation and mechanical mismatching. Experiments and finite element analyses were performed within the framework of continuum damage mechanics to investigate the effect of mismatching of welded joints on void nucleation and coalescence during monotonic loading. It was found that the damage of under-matched joints mainly occurred in the sandwich layer, and the damage resistance of the joints decreases with the decrease of the sandwich layer width. The damage of over-matched joints mainly occurred in the base metal adjacent to the sandwich layer, and the damage resistance of the joints increases with the decrease of the sandwich layer width. The mechanisms of the initiation of the micro voids/cracks were found to be cracking of the inclusions or the embrittled second phase, and the debonding of the inclusions from the matrix. Experimental fatigue crack growth rate testing showed that the fatigue life of under-matched central crack panel specimens is longer than that of over-matched and even-matched specimens. Further investigation by elastic-plastic finite element analysis indicated that fatigue crack closure, which originated from the inhomogeneous yielding adjacent to the crack tip, played an important role in the fatigue crack propagation. The applicability of the J integral concept to the mismatched specimens with crack extension under cyclic loading was assessed. The concept of fatigue class used by the International Institute of Welding was introduced in the parametric numerical analysis of several welded joints. The effect of weld geometry and load condition on the fatigue strength of ferrite-pearlite steel joints was systematically evaluated based on linear elastic fracture mechanics. Joint types included lap joints, angle joints and butt joints. Various combinations of tensile and bending loads were considered during the evaluation, with the emphasis on the existence of both root and toe cracks. For a lap joint with a small lack of penetration, a reasonably large weld leg and a smaller flank angle were recommended for engineering practice in order to achieve higher fatigue strength. It was found that the fatigue strength of the angle joint depended strongly on the location and orientation of the pre-existing crack-like welding defects, even if the joint was welded with full penetration. It is commonly believed that double sided butt welds can have significantly higher fatigue strength than single sided welds, but fatigue crack initiation and propagation can originate from the weld root if the welding procedure results in a partial penetration. It is clearly shown that the fatigue strength of the butt joint could be improved remarkably by ensuring full penetration. Nevertheless, increasing the fatigue strength of a butt joint by increasing the size of the weld is an uneconomical alternative.
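The crack-growth discussion lends itself to a small numerical illustration. The sketch below integrates the standard Paris law da/dN = C (ΔK)^m for a centre-cracked panel with ΔK = Y Δσ √(πa); the material constants, stress range, and geometry factor Y = 1 are placeholder assumptions, not values from the study.

```python
import math

def cycles_to_grow(a0, af, dsigma, C=3e-12, m=3.0, Y=1.0, da=1e-5):
    """Numerically integrate the Paris law da/dN = C*(dK)^m.
    Units: a in m, dsigma in MPa, so dK is in MPa*sqrt(m).
    Returns the number of cycles to grow the crack from a0 to af."""
    a, N = a0, 0.0
    while a < af:
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK**m)   # cycles spent on this crack increment
        a += da
    return N

# e.g. cycles_to_grow(0.001, 0.02, dsigma=100.0) for a 1 mm -> 20 mm crack
```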
Abstract:
The description of patterns of variation in any character system within well-defined species is fundamental for understanding lineage diversification and the identification of geographic units that represent opportunities for sustained evolutionary divergence. In this paper, we analyze intraspecific variation in cranial shape in the Pumpkin Toadlet, Brachycephalus ephippium, a miniaturized species composed of isolated populations on the slopes of the mountain ranges of southeastern Brazil. Shape variables were derived using geometric-statistical methods that describe shape change as localized deformations in a spatial framework defined by anatomical landmarks in the cranium of B. ephippium. By statistically weighting differences between landmarks that are not close together (changes at larger geometric scale), cranial variation among geographic samples of B. ephippium appears continuous, with no obvious gaps. This pattern of variation is caused by a confounding effect between within-sample allometry and among-sample shape differences. In contrast, by statistically weighting differences between landmarks that are closely spaced (changes at smaller geometric scale), within-sample and among-sample shape variation are not confounded, and a marked geographic differentiation among population samples of B. ephippium emerges. The observed pattern of geographic differentiation in cranial shape apparently cannot be explained as isolation-by-distance. This study provides the first evidence that the detection of morphological variation or lack thereof (that is, morphological conservatism) may be conditional on the scale at which shape variation is measured within the methodological formalism of geometric morphometrics.
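For context, landmark-based analyses of this kind start from Procrustes superimposition. The following is a minimal ordinary Procrustes alignment of one landmark configuration onto another; the generalized Procrustes fit across samples and the scale-weighted deformation analysis the paper relies on are not shown.

```python
import numpy as np

def procrustes(X, Y):
    """Align landmark configuration Y (k x 2) onto X: remove translation,
    scale both to unit centroid size, and rotate by the SVD solution."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)       # unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                          # optimal rotation (up to reflection)
    return Yc @ R
```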
Abstract:
In different regions of Brazil, population growth and economic development can degrade water quality, compromising watershed health and human supply. Because of its ability to combine spatial and temporal data in the same environment and to create water resources management (WRM) models, a Geographical Information System (GIS) is a powerful tool for managing water resources, preventing floods and estimating water supply. This paper discusses the integration between GIS and hydrological models and presents a case study relating to the upper section of the Paraíba do Sul Basin (São Paulo State portion), situated in the Southeast of Brazil. The case study has a database suitable for the basin's dimensions, including topographic maps digitized at 1:50,000 scale. From an ArcGIS®/ArcHydro Framework Data Model, a geometric network was created to produce different raster products. The first grid, derived from the digital elevation model (DEM), is the flow direction map, followed by the flow accumulation, stream and catchment maps. The next steps in this research are to include the different multipurpose reservoirs situated along the Paraíba do Sul River and to incorporate rainfall time series data in ArcHydro to build a hydrologic data model within a GIS environment, in order to produce a comprehensive spatial-temporal model.
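As a hedged sketch of the first raster-derivation step mentioned above, the function below computes a D8-style flow-direction grid from a DEM with NumPy; the eight-neighbour power-of-two coding and the edge handling are generic illustrations, not ArcHydro's implementation.

```python
import numpy as np

# D8 neighbour offsets (drow, dcol) with the usual power-of-two codes:
# E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128
OFFSETS = [(-1, 0, 64), (-1, 1, 128), (0, 1, 1), (1, 1, 2),
           (1, 0, 4), (1, -1, 8), (0, -1, 16), (-1, -1, 32)]

def flow_direction(dem, cell=1.0):
    """Assign each interior cell the code of its steepest downslope
    neighbour (0 where no neighbour is lower, i.e. a pit or a flat)."""
    rows, cols = dem.shape
    fdr = np.zeros(dem.shape, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, code = 0.0, 0
            for dr, dc, k in OFFSETS:
                dist = cell * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best:
                    best, code = slope, k
            fdr[r, c] = code
    return fdr
```

Flow accumulation, stream, and catchment grids are then derived by following these pointers downslope.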
Abstract:
In this paper, we propose a new three-parameter long-term lifetime distribution induced by a latent complementary risk framework, with decreasing, increasing and unimodal hazard functions: the long-term complementary exponential geometric distribution. The new distribution arises from latent complementary risk scenarios, where the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime among all risks, together with the presence of long-term survivors. The properties of the proposed distribution are discussed, including its probability density function and explicit algebraic formulas for its reliability, hazard and quantile functions and order statistics. Parameter estimation is based on the usual maximum-likelihood approach. A simulation study assesses the performance of the estimation procedure. We compare the new distribution with its particular cases, as well as with the long-term Weibull distribution, on three real data sets, observing its potential and competitiveness in comparison with some usual long-term lifetime distributions.
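For concreteness, here is a plausible construction consistent with the description above (the exact parametrization in the paper may differ): with M latent risks distributed geometrically, i.i.d. exponential latent lifetimes, X their maximum, and a long-term survivor fraction p,

```latex
% Complementary exponential geometric (CEG) cdf, from X = max(Z_1,...,Z_M),
% Z_i ~ Exp(lambda) i.i.d., M ~ Geometric(theta) on {1, 2, ...}:
F(x) \;=\; \mathbb{E}\!\left[(1 - e^{-\lambda x})^{M}\right]
      \;=\; \frac{\theta\,(1 - e^{-\lambda x})}{1 - (1-\theta)(1 - e^{-\lambda x})},
      \qquad x > 0 .
% Long-term (cure-fraction) version: a proportion p never fails, so the
% population survival function is
S_{\mathrm{pop}}(x) \;=\; p + (1 - p)\,\bigl(1 - F(x)\bigr), \qquad 0 < p < 1 .
```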
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure in which many user interactions are needed. This means automation is needed for use in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. Thereby, comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
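The in-memory, module-per-component design described above can be sketched abstractly as follows; the class names and the Particle fields are illustrative assumptions, not the actual framework's interfaces.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Particle:
    # minimal phase-space record: position, direction, energy (MeV), weight
    pos: tuple
    dir: tuple
    energy: float
    weight: float = 1.0

class TransportModule:
    """One stage of the chain: the source, a beam modifier, or the patient.
    Modules hand particle lists to each other in memory; no file interface."""
    def transport(self, particles: Iterable[Particle]) -> List[Particle]:
        raise NotImplementedError

def run_chain(source: TransportModule, modules: List[TransportModule]):
    """Source -> beam modifiers (one module each) -> patient dose engine."""
    particles = source.transport([])
    for module in modules:   # e.g. jaws, MLC, wedge, then the patient model
        particles = module.transport(particles)
    return particles
```

Keeping each beam modifier behind the same interface is what allows a simple or exact geometry, or a different transport code, to be swapped in per module.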
Abstract:
This paper presents a project for providing students of Structural Engineering with the flexibility to learn outside classroom schedules. The goal is a framework for adaptive e-learning based on a repository of open educational courseware with a set of basic Structural Engineering concepts and fundamentals. These are paramount for students to expand their technical knowledge and skills in the structural analysis and design of tall buildings, arch-type structures and bridges. Concepts related to structural behaviour such as linearity, compatibility, stiffness and influence lines have traditionally been elusive for students. The objective is to offer the student a teaching-learning process through which to acquire the necessary intuitive knowledge, cognitive skills and the basis for further technological modules and professional development in this area. As a side effect, the system is expected to help students improve their preparation for exams on the subject. In this project, a web-based open-source system for studying influence lines on continuous beams is presented. It encompasses a collection of interactive, user-friendly applications accessible via the Web, written in JavaScript with the JQuery and Dygraph libraries, taking advantage of their efficiency and graphic capabilities. It is available in both Spanish and English. The student can set the geometric, topologic, boundary and mechanical layout of a continuous beam. As the loading and the support conditions are changed, the changes in the beam response appear on the screen, so that the effects of the several issues involved in structural analysis become apparent. This open interaction allows the student to simulate and virtually infer the structural response. Different levels of complexity can be handled, and ongoing help is at hand for each of them. Students can freely boost their experiential learning on this subject at their own pace, in order to further share, process, generalize and apply the relevant essential concepts of Structural Engineering analysis. Besides, this collection is being added to the "Virtual Lab of Continuum Mechanics" of the UPM, launched in 2013 (http://serviciosgate.upm.es/laboratoriosvirtuales/laboratorios/medios-continuos-en-construcci%C3%B3n).
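A minimal sketch of the computation such an applet performs: build Euler-Bernoulli beam elements, move a unit load along a two-span continuous beam, and record the middle-support reaction as the influence ordinate. The EI value, span lengths, and mesh density are placeholder assumptions; the applet's own JavaScript internals are not reproduced here.

```python
import numpy as np

def beam_k(EI, L):
    """4x4 Euler-Bernoulli element stiffness, DOFs (w_i, th_i, w_j, th_j)."""
    return (EI / L**3) * np.array([[12, 6*L, -12, 6*L],
                                   [6*L, 4*L*L, -6*L, 2*L*L],
                                   [-12, -6*L, 12, -6*L],
                                   [6*L, 2*L*L, -6*L, 4*L*L]])

def influence_line(EI=1.0, span=5.0, ne=20):
    """Influence line of the middle-support reaction of a two-span beam
    (supports at x = 0, span, 2*span) under a moving downward unit load."""
    n = 2 * ne + 1                       # nodes over both spans
    L = span / ne
    ndof = 2 * n
    K = np.zeros((ndof, ndof))
    for e in range(n - 1):               # assemble global stiffness
        d = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(d, d)] += beam_k(EI, L)
    supports = [0, 2 * ne, 2 * (n - 1)]  # pinned deflection DOFs
    free = [i for i in range(ndof) if i not in supports]
    mid = 2 * ne                         # middle-support deflection DOF
    eta = []
    for node in range(n):                # unit load at each node in turn
        F = np.zeros(ndof)
        F[2 * node] = -1.0
        u = np.zeros(ndof)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
        eta.append(K[mid] @ u - F[mid])  # reaction recovered at the support
    return np.array(eta)
```

Plotting the returned ordinates against the load position reproduces the familiar influence-line shape the applet displays interactively.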