999 results for Scenario Programming, Markup Languages, 3D Virtualworlds
Abstract:
Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra to handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids to enable spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra, called MapScript, that incorporates templates and special processing constructs is described. Example program scripts are presented that perform diverse neighborhood analyses for descriptive, model-based, and process-based analysis.
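Although the abstract shows no MapScript syntax, the template idea can be pictured with a small sketch in another language. The Python/NumPy fragment below (function and variable names are illustrative, not MapScript) applies a reduction over the cells a boolean template selects around each grid cell, the kind of windowing operation the paper generalizes.

```python
import numpy as np

def focal_apply(grid, template, func):
    """Apply func over the neighborhood selected by a boolean template.

    grid     -- 2D array representing one map-algebra layer
    template -- 2D boolean array marking the cells of the moving window
    func     -- reduction applied to each neighborhood (e.g. np.mean)
    """
    th, tw = template.shape
    pad_h, pad_w = th // 2, tw // 2
    padded = np.pad(grid, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    out = np.empty(grid.shape, dtype=float)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = padded[i:i + th, j:j + tw]
            out[i, j] = func(window[template])   # only template cells count
    return out

elevation = np.random.rand(100, 100)             # toy layer
cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)        # a non-rectangular template
smoothed = focal_apply(elevation, cross, np.mean)
```

The cross-shaped window is the point of the exercise: a template need not be a plain rectangle, which is what distinguishes it from conventional focal windows.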
Abstract:
Geographical information systems (GIS) coupled with 3D visualisation technology are an emerging tool for urban planning and landscape design applications. The utility of 3D GIS for realistically visualising the built environment and proposed development scenarios is much advocated in the literature. Planners assess the merits of proposed changes using visual impact assessment (VIA). We used ArcView GIS and visualisation software called PolyTRIM, from the University of Toronto Centre for Landscape Research (CLR), to create a 3D scene of the entrance to a university campus. The paper investigates the thesis that facilitating VIA in planning and design requires not only visualisation but also a structured evaluation technique (Delphi) to arbitrate the decision-making process.
Abstract:
This paper presents the multi-threading and internet message communication capabilities of Qu-Prolog. Message addresses are symbolic, and the communications package provides high-level support that completely hides details of IP addresses and port numbers, as well as the underlying TCP/IP transport layer. The combination of multi-threading and high-level inter-thread message communication provides simple, powerful support for implementing internet-distributed intelligent applications.
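Qu-Prolog's own syntax is not reproduced here; as a rough Python analogue of the idea (all names hypothetical), a registry can map symbolic names to mailboxes so that senders never touch IP addresses, ports, or sockets:

```python
import queue
import threading

# Hypothetical sketch: a name table maps symbolic addresses to mailboxes,
# standing in for the IP/port resolution the communications package hides.
_mailboxes = {}

def register(name):
    _mailboxes[name] = queue.Queue()

def send(name, message):
    # The sender only knows the symbolic name, never transport details.
    _mailboxes[name].put(message)

def receive(name):
    return _mailboxes[name].get()   # blocks until a message arrives

register("planner")

def worker():
    send("planner", ("answer", 42))

threading.Thread(target=worker).start()
print(receive("planner"))           # ('answer', 42)
```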
Abstract:
In this paper we describe a distributed object-oriented logic programming language in which an object is a collection of threads deductively accessing and updating a shared logic program. The key features of the language, such as static and dynamic object methods and multiple inheritance, are illustrated through a series of small examples. We show how we can implement object servers that allow remote spawning of objects, which we can use as staging posts for mobile agents. We give as an example an information-gathering mobile agent that can be queried about the information it has gathered so far whilst it gathers new information. Finally, we define a class of co-operative reasoning agents that can do resource-bounded inference for full first-order predicate logic, handling multiple queries and information updates concurrently. We believe that the combination of the concurrent OO and LP programming paradigms produces a powerful tool for quickly implementing rational multi-agent applications on the internet.
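The core notion, an object as a collection of threads deductively reading and dynamically updating a shared logic program, can be caricatured in Python (illustrative names; the actual language is Prolog-based, and real deduction is far richer than the filtering shown):

```python
import threading

class SharedProgram:
    """Toy stand-in for a shared logic program: a lockable set of facts."""
    def __init__(self):
        self._facts = set()
        self._lock = threading.Lock()

    def assert_fact(self, fact):
        # Dynamic update, analogous to asserting a clause.
        with self._lock:
            self._facts.add(fact)

    def query(self, predicate):
        # Deductive access, reduced here to simple fact lookup.
        with self._lock:
            return {f for f in self._facts if f[0] == predicate}

program = SharedProgram()

def gatherer():                      # one thread updates the program ...
    program.assert_fact(("found", "http://example.org/page1"))

t = threading.Thread(target=gatherer)
t.start()
t.join()
print(program.query("found"))        # ... while others can query it
```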
Abstract:
The long short-term memory (LSTM) is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, a means of processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
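The "second-order" qualifier refers to multiplicative connections, where the input effectively selects the weight matrix applied to the state. A minimal NumPy sketch of one such update (dimensions, nonlinearity, and initialization are arbitrary; this shows the multiplicative idea, not Pollack's exact SCN architecture):

```python
import numpy as np

def second_order_step(state, x, W):
    """One second-order update: the input x modulates the recurrent weights.

    state -- current state vector, shape (n,)
    x     -- input vector, shape (m,)
    W     -- third-order weight tensor, shape (n, n, m)
    """
    # Contract the tensor with the input to obtain an input-dependent
    # matrix, then apply it to the state -- the hallmark of second-order nets.
    Wx = np.einsum("ijm,m->ij", W, x)
    return np.tanh(Wx @ state)

rng = np.random.default_rng(0)
n, m = 4, 3
W = rng.normal(size=(n, n, m))
state = rng.normal(size=n)
for symbol in np.eye(m):             # feed a one-hot encoded toy string
    state = second_order_step(state, symbol, W)
```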
Abstract:
A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
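For readers unfamiliar with SDP, a minimal backward-induction sketch over a finite horizon shows how an exact state- and time-dependent policy falls out. States, actions, rewards, and transition probabilities below are toy stand-ins, not the paper's metapopulation model:

```python
import numpy as np

# Toy SDP: states index metapopulation configurations, actions index
# management options; all numbers are illustrative.
n_states, n_actions, horizon = 5, 3, 30
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a,s,s']
reward = rng.random((n_actions, n_states))  # e.g. persistence probability

V = np.zeros(n_states)                      # terminal value
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):          # backward induction
    Q = reward + P @ V                      # Q[a, s] = r(a,s) + E[V(s')]
    policy[t] = Q.argmax(axis=0)            # best action per state, per step
    V = Q.max(axis=0)

# policy[t, s] is the exact optimal action in state s at time t: the
# state- and time-dependent strategy the abstract contrasts with fixed ones.
```

The exactness per state is exactly what Monte Carlo simulation of a handful of fixed strategies cannot provide, at the cost of enumerating the full state space, hence the noted limitation to small patch networks.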
Abstract:
Most external assessments of cervical range of motion assess the upper and lower cervical regions simultaneously. This study investigated the within- and between-days reliability of the clinical method used to bias this movement to the upper cervical region, namely measuring rotation of the head and neck in a position of full cervical flexion. Measurements were made using the Fastrak measurement system and were conducted by one operator. Results indicated high levels of within- and between-days repeatability (range of ICC(2,1) values: 0.85-0.95). The ranges of axial rotation to the right and left, measured with the neck positioned in full flexion, were approximately 56% and 50%, respectively, of total cervical rotation, which relates well to the proportional division of rotation between the upper and lower cervical regions. These results suggest that this method of measuring rotation would be appropriate for use in studies of subjects with movement dysfunction in the upper cervical region, such as those with cervicogenic headache.
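For reference, ICC(2,1) is the two-way random-effects, absolute-agreement, single-measure intraclass correlation of Shrout and Fleiss (1979); a sketch of the standard formula on toy data (not the study's measurements):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    ratings -- shape (n_subjects, k_sessions); Shrout & Fleiss (1979).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ssr = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ssc = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between sessions
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc       # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy repeated rotation measurements (degrees), two sessions:
data = np.array([[52.0, 54.0], [48.0, 47.5], [56.0, 57.0], [50.0, 51.0]])
print(icc_2_1(data))
```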
Abstract:
The initial aim of this research was to follow work processes in light of the theoretical framework of Ergology, thus conceiving of work as a dialectical relation between technique and human action. The objective was to map the work involved in the granite finishing process at a large organization located in Espírito Santo, Brazil, and after some time in the field the research problem took the following form: how is industrious competence constituted in granite finishing at a large organization? The research is justified because, despite its economic relevance, the ornamental stone sector in Espírito Santo shows serious shortcomings with respect to management. For Organizational Studies, the relevance is reinforced by the fact that it brings the ergological approach into this area and stakes out, within the debate on competence, the notion of industrious competence, not yet explored in this field of study. The research was conducted as an ergological cartography, articulating cartographic cues with the theoretical-conceptual framework of Ergology. The techniques used were: participant observation over six months, with an average of three field visits per week; eight semi-structured, in-depth interviews of about 50 minutes each with operational workers; one interview with the production manager and another with a representative of the People Management area; conversations with other workers to enrich the field diary; and further conversations and observations at the end of the analysis, for confrontation-validation with the workers. The analysis procedures can be described as follows: a) floating readings to bring out central aspects related to the two dimensions of work, technique and human action; b) in-depth readings to bring out singularities and specificities of the dialectic between the two; c) in-depth readings to bring out aspects related to the ingredients of industrious competence.
Although no analytical categories or subcategories were delimited in advance, five analytical axes emerged from the analysis: 1) the procedures to be employed in the granite finishing process, encompassing the finishing stages; the roles to be performed and the tasks to be carried out; the regulatory standards; the technical knowledge required to program and operate machines; and the production orders prescribed by the commercial department; 2) real work, distinguished from work as the execution of procedures by its focus on human action in the face of real situations, full of events and variability, throughout the process, encompassing load preparation; lamination; sawing; honing; resin application; polishing-classification; retouching; package closing; and container loading; 3) the different modes of use of self that, as a tendency, are responsible for constituting competent action at each stage of the process, in the dialectic between technique and human action; 4) the way each ingredient of industrious competence acts and is constituted, as well as its concentration, as a tendency, at each stage of the process, based on the types of use of self that, also as a tendency, are most responsible for competent action, thus presenting the profile of industrious competence in granite finishing at the company under analysis; 5) two possible factors that strengthen the ingredients of industrious competence, namely transduction and non-humans. Building on all of the above, the final considerations problematize aspects of the debate on competences and people-management practices, with competence understood as follows: mastery in the act of taking advantage of the environment and of oneself to manage work situations, in which action consists of mobilizing resources that are difficult to perceive and describe, inherent to the worker yet constituted and manifested through uses of self by oneself and by others in and for the real act of work, markedly at an infinitesimal level, in situations that demand the application of protocols concurrently with the management of variability and events that are partly unforeseeable and ineliminable.
Abstract:
More and more current software systems rely on non-trivial coordination logic to combine autonomous services, typically running on different platforms and often owned by different organizations. Often, however, coordination data is deeply entangled in the code and therefore difficult to isolate and analyse separately. COORDINSPECTOR is a software tool that combines slicing and program analysis techniques to isolate all coordination elements from the source code of an existing application. Such a reverse engineering process provides a clear view of the actually invoked services as well as of the orchestration patterns that bind them together. The tool analyses Common Intermediate Language (CIL) code, the native language of the Microsoft .Net Framework. The scope of application of COORDINSPECTOR is therefore quite large: potentially any piece of code developed in any of the programming languages that compile to the .Net Framework. The tool generates graphical representations of the coordination layer and identifies the underlying business process orchestrations, rendering them as Orc specifications.
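The extraction step can be imagined as a pass over disassembled CIL; the toy sketch below scans for call/callvirt instructions on lines mentioning a proxy type (the regex, marker, and type names are purely illustrative and not COORDINSPECTOR's actual analysis):

```python
import re

# Toy stand-in for the slicing step: scan ildasm-style CIL text for
# call/callvirt instructions whose target looks like a service proxy.
CALL_RE = re.compile(r"\b(?:call|callvirt)\s+.*?::(\w+)\(")

def extract_service_calls(cil_text, proxy_marker="ServiceClient"):
    """Collect invoked method names on lines mentioning a proxy type."""
    return [m.group(1)
            for line in cil_text.splitlines()
            if proxy_marker in line
            for m in [CALL_RE.search(line)]
            if m]

sample = """
  IL_0010: callvirt instance string Orders.OrderServiceClient::PlaceOrder(string)
  IL_0020: call     void [mscorlib]System.Console::WriteLine(string)
"""
print(extract_service_calls(sample))   # ['PlaceOrder']
```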
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a readout of disease progression and as a basis for developing therapeutic strategies. This work presents an image processing tool to automatically segment, classify, and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data-set images, each containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appear dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the image histogram of this outcome was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and, by thinning its skeleton, the medial axis of the worm body. Based on the diameter of this exterior contour and the medial axis of the animal, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable, and high-throughput quantification of protein aggregates. This may lead to significant improvements in treatment planning and preventive interventions for neurodegenerative diseases.
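The diffusion step described here follows the familiar Perona-Malik iteration with a robust edge-stopping function; a minimal 2D sketch (parameters and iteration count are illustrative, not the paper's implementation):

```python
import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight edge-stopping function (Black et al., 1998):
    diffusion halts entirely where the gradient magnitude exceeds sigma."""
    return np.where(np.abs(grad) <= sigma,
                    (1.0 - (grad / sigma) ** 2) ** 2, 0.0)

def anisotropic_diffusion(img, sigma=0.1, lam=0.2, n_iter=20):
    """Perona-Malik style diffusion with Tukey's biweight edge stopping."""
    u = img.astype(float)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds +
                    tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u
```

The pipeline then thresholds at a level derived from the histogram median of this smoothed output, as described above.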
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for modelling an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems, Kinect and Polhemus FastSCAN, and compared against CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and of 1.5 mm for the FastSCAN sensor.
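The reported figures are RMS surface errors; a distance of this kind between a scanned point cloud and CT-derived reference points can be computed along these lines (toy data; names illustrative, not the paper's processing chain):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(scan_points, reference_points):
    """RMS of nearest-neighbour distances from scan to reference surface."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points)   # closest reference point each
    return np.sqrt(np.mean(distances ** 2))

# Toy clouds standing in for the Kinect mesh and the CT-derived surface:
rng = np.random.default_rng(2)
ct = rng.random((1000, 3)) * 100.0           # coordinates in mm
kinect = ct + rng.normal(scale=3.25, size=ct.shape)
print(rms_surface_error(kinect, ct))         # on the order of a few mm
```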
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to interact efficiently with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application in which user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, in which the user input is mapped to a non-Cartesian space and used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term that improves interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance in terms of both segmentation accuracy and total analysis time, contributing to a more efficient use of existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire algorithm.
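As a rough illustration of the interaction paradigm, not the authors' algorithm, a radially parameterized 2D contour can be pulled toward a user-supplied point in polar (non-Cartesian) coordinates, with a smoothing pass standing in for the shape regularization term:

```python
import numpy as np

def steer_contour(radii, center, user_point, strength=1.0, smooth=0.3):
    """Pull a radially parameterized contour toward a user click.

    radii      -- r(theta) samples of the current boundary, shape (n,)
    center     -- (x, y) reference point of the contour
    user_point -- (x, y) clicked by the operator
    Sketch only: map the click to polar coordinates, move the nearest
    boundary sample to the clicked radius, then smooth the neighbours.
    """
    n = radii.size
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx, dy = user_point[0] - center[0], user_point[1] - center[1]
    r_user = np.hypot(dx, dy)
    theta_user = np.arctan2(dy, dx) % (2.0 * np.pi)
    i = np.argmin(np.abs(thetas - theta_user))  # nearest angular sample
                                                # (wraparound ignored here)
    out = radii.copy()
    out[i] += strength * (r_user - out[i])      # data (user) term
    # Regularization sketch: blend each sample with its neighbours.
    return (1 - smooth) * out + smooth * 0.5 * (np.roll(out, 1) +
                                                np.roll(out, -1))
```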