Abstract:
Electrospinning (ES) can readily produce polymer fibers with cross-sectional dimensions ranging from tens of nanometers to tens of microns. Qualitative estimates of surface area coverage are rather intuitive. However, quantitative analytical and numerical methods for predicting surface coverage during ES have not been covered in sufficient depth to be applied in the design of novel materials, surfaces, and devices from ES fibers. This article presents a modeling approach to ES surface coverage where an analytical model is derived for use in quantitative prediction of surface coverage of ES fibers. The analytical model is used to predict the diameter of circular deposition areas of constant field strength and constant electrostatic force. Experimental results of polyvinyl alcohol fibers are reported and compared to numerical models to supplement the analytical model derived. The analytical model provides scientists and engineers a method for estimating surface area coverage. Both applied voltage and capillary-to-collection-plate separation are treated as independent variables for the analysis. The electric field produced by the ES process was modeled using COMSOL Multiphysics software to determine a correlation between the applied field strength and the size of the deposition area of the ES fibers. MATLAB scripts were utilized to combine the numerical COMSOL results with derived analytical equations. Experimental results reinforce the parametric trends produced via modeling and lend credibility to the use of modeling techniques for the qualitative prediction of surface area coverage from ES. (Copyright: 2014 American Vacuum Society.)
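As a minimal illustration of the correlation step summarized above (relating applied field strength to the size of the deposition area), the following Python sketch fits an assumed power-law relation to hypothetical voltage/separation/diameter data. The data values, the power-law form, and the fitted constants are illustrative assumptions, not results from the article, which derives its correlation from COMSOL field solutions.

import numpy as np

# Hypothetical (voltage [kV], separation [cm], deposition diameter [cm]) triples
data = np.array([
    [10.0, 10.0, 4.1],
    [15.0, 10.0, 5.0],
    [20.0, 10.0, 5.7],
    [15.0, 15.0, 4.4],
    [20.0, 15.0, 5.0],
])
field = data[:, 0] / data[:, 1]   # nominal field strength [kV/cm]
diameter = data[:, 2]

# Assume diameter ~ C * field**m and fit the exponent by least squares in log space.
m, log_c = np.polyfit(np.log(field), np.log(diameter), 1)
print(f"fitted exponent m = {m:.2f}, prefactor C = {np.exp(log_c):.2f} cm")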
Abstract:
OBJECTIVES: To analyze computer-assisted diagnostics and virtual implant planning and to evaluate the indication for template-guided flapless surgery and immediate loading in the rehabilitation of the edentulous maxilla. MATERIALS AND METHODS: Forty patients with an edentulous maxilla were selected for this study. The three-dimensional analysis and virtual implant planning were performed with the NobelGuide software program (Nobel Biocare, Göteborg, Sweden). Prior to computed tomography, aesthetic and functional aspects were checked clinically. Either a well-fitting denture or an optimized prosthetic setup was used and then converted to a radiographic template. This allowed a computer-guided analysis of the jaw together with the prosthesis. Accordingly, the best implant position was determined in relation to the bone structure and the prospective tooth position. For all jaws, the hypothetical indications for (a) four implants with a bar overdenture and (b) six implants with a simple fixed prosthesis were planned. The planning of the optimized implant positions was then analyzed as follows: the number of implants that could be placed in a sufficient quantity of bone was calculated; additional surgical procedures (guided bone regeneration, sinus floor elevation) that would be necessary due to reduced bone quality and quantity were identified; and the indication for template-guided, flapless surgery or an immediately loaded protocol was evaluated. RESULTS: Model (a), bar overdentures: for 28 patients (70%), all four implants could be placed in sufficient bone (112 implants in total), so a fully flapless procedure could be suggested. For six patients (15%), sufficient bone was not available for any of their planned implants. The remaining six patients exhibited a combination of sufficient and insufficient bone. Model (b), simple fixed prosthesis: for 12 patients (30%), all six implants could be placed in sufficient bone (72 implants in total), so a fully flapless procedure could be suggested. For seven patients (17%), sufficient bone was not available for any of their planned implants. The remaining 21 patients exhibited a combination of sufficient and insufficient bone. DISCUSSION: In the maxilla, advanced atrophy is often observed, and implant placement becomes difficult or impossible. Thus, flapless surgery or an immediate loading protocol can be performed in only a selected number of patients. Nevertheless, the use of a computer program for prosthetically driven implant planning is highly efficient and safe. The three-dimensional view of the maxilla allows the determination of the best implant position, the optimization of the implant axis, and the definition of the best surgical and prosthetic solution for the patient. Thus, a protocol that combines a computer-guided technique with conventional surgical procedures becomes a promising option, which needs to be further evaluated and improved.
Abstract:
The study is based on experimental work conducted in alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to the specific surface area (SSA), and converted the SSA into the correlation length. From measurements of snow radiance at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature from cold-laboratory work. The technique shows great potential for determining the snow correlation length under field conditions.
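A minimal sketch of the SSA-to-correlation-length conversion mentioned above, assuming the commonly used Debye-type relation l_ex = 4(1 − f)/(SSA · ρ_ice), where f is the ice volume fraction; the study's own empirical relation may differ, and the input values below are illustrative.

RHO_ICE = 917.0  # density of ice [kg/m^3]

def correlation_length(ssa: float, snow_density: float) -> float:
    """Exponential correlation length [m] from SSA [m^2/kg], Debye-type relation."""
    f = snow_density / RHO_ICE           # ice volume fraction
    return 4.0 * (1.0 - f) / (ssa * RHO_ICE)

# Example: SSA = 20 m^2/kg, density = 300 kg/m^3 gives roughly 0.15 mm
print(f"{correlation_length(20.0, 300.0) * 1e3:.3f} mm")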
Abstract:
In recent years, the vision of a world of smart everyday objects has entered public awareness under terms such as Ubiquitous Computing, Pervasive Computing, and Ambient Intelligence. Smart objects are to be equipped with digital logic, sensors, and networking capability. They thus form an "Internet of Things" in which the computer disappears as a standalone device and is absorbed into the objects of the physical world. While on the one hand the vision of the "Internet of Things" can, at least from a technical perspective, probably be realized in the medium term thanks to continuing advances in computer science, microelectronics, communications technology, and materials science, on the other hand the associated economic, legal, and social questions must be resolved. To advance and realize the vision of the "Internet of Things", the AutoID-Center first developed the EPC concept, which relies on global network-based information standards and is today further developed and implemented by EPCglobal. The EPC makes it possible to provide comprehensive product information via the Internet. RFID technology is the most important foundation of the "Internet of Things", as it bridges the physical world of products and the virtual world of digital data. Objects equipped with RFID transponders can communicate with one another and, for example, find their way through the process chain. Using the information stored on the RFID transponders, they can then autonomously control conveyor systems or other machines without human intervention.
Abstract:
The aim of this paper is to give a general overview of the impact of computers on art history. The paper begins by examining the character of the information technology revolution, including its frequently noted parallels with the "Gutenberg" revolution, which originated in the development of the printing press. As with Gutenberg, the development of information technology is technologically driven. However, its emphasis on flexibility and dissemination leads it toward a different goal. This flexibility cuts both ways: while it opens up many new possibilities, it also seems to encourage a more fragmentary, iterative approach to inquiry that privileges information over knowledge. It remains open, however, whether this approach should be regarded as a necessary consequence of the structure of this new technology, or whether it is better described as the product of a broader intellectual shift prompted by the rise of postmodern discourse. In this article I argue for the latter. I also hold that the tendency toward fragmentation inherent in the new technology can be counteracted, provided the desire to do so exists. The development of the computer is closely tied to consumer demand. For this reason, a new trend in demand can help determine how technological processes are extended and modified. The paper further discusses issues that specifically affect the study of images, considering both the potential of digital images for new forms of research and analysis and the many new possibilities of the Internet age.
Abstract:
The article deals with the design of product-service combinations from the standpoint of information integration. The author explains the fundamental differences between the traditional and the modern operations management concept. In addition, the role of logistic support analysis is considered. The article presents the concept of CALS (Continuous Acquisition and Life cycle Support) as an environment that enables data sharing among the business partners involved in the development process.
Abstract:
A viscosity model calibrated with process data is proposed and applied to predict the viscosity of a polyamide 12 (PA12) polymer melt as a function of time, temperature, and shear rate. In a first step, the viscosity model was derived from experimental data. It is based primarily on the three-parameter Carreau approach, extended by two additional shift factors. The temperature dependence of the viscosity is accounted for by the Arrhenius shift factor aT. A further shift factor aSC (Structural Change) is introduced to describe the structural change of PA12 caused by the process conditions during laser sintering. This structural change was observed as a significant increase in viscosity. It was concluded that this viscosity increase is due to molecular weight build-up and can be understood as post-condensation. Depending on the time and temperature conditions, the viscosity was found to approach an irreversible limit exponentially as a consequence of the molecular weight build-up. The rate of this post-condensation is time- and temperature-dependent. It is assumed that the powder-bed temperature causes molecular weight build-up and hence chain extension. This progressive increase in chain length reduces molecular mobility and suppresses further post-condensation. The shift factor aSC expresses this physico-chemical picture and contains two additional parameters: aSC,UL corresponds to the upper viscosity limit, whereas k0 specifies the rate of structural change. It further proved useful to distinguish between a flow activation energy and a structural-change activation energy when calculating aT and aSC. The model parameters were optimized using a genetic algorithm. Good agreement was found between calculated and measured viscosities, so the viscosity model is able to predict the viscosity of a PA12 melt under the combined time and temperature influence of laser sintering. In a second step, the model was applied to calculate the viscosity during the laser sintering process as a function of energy density. For this purpose, process data such as melt temperature and exposure time, measured on-line with a high-speed thermographic camera, were used. Finally, the influence of the structural change on the viscosity level in the process was demonstrated.
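The following Python sketch shows one way the model structure described above could be assembled: a three-parameter Carreau law multiplied by the Arrhenius shift factor aT and a structural-change factor aSC that approaches the upper limit aSC,UL at a temperature-dependent rate governed by k0. All numerical parameter values, and the exact manner in which the shift factors enter the Carreau law, are assumptions for illustration; the calibrated values come from the genetic-algorithm fit described in the abstract.

import math

R = 8.314  # universal gas constant [J/(mol K)]

def a_t(temp, t_ref=473.15, e_flow=50e3):
    """Arrhenius shift factor for temperature (assumed flow activation energy)."""
    return math.exp(e_flow / R * (1.0 / temp - 1.0 / t_ref))

def a_sc(time, temp, a_ul=5.0, k0=1e7, e_sc=80e3):
    """Structural-change factor: exponential approach to the upper limit a_ul
    at an assumed Arrhenius-type rate derived from k0."""
    k = k0 * math.exp(-e_sc / (R * temp))
    return 1.0 + (a_ul - 1.0) * (1.0 - math.exp(-k * time))

def viscosity(shear_rate, time, temp, eta0=1000.0, lam=0.05, n=0.6):
    """Time- and temperature-shifted Carreau viscosity [Pa s]."""
    shift = a_t(temp) * a_sc(time, temp)
    g = shift * shear_rate
    return shift * eta0 / (1.0 + (lam * g) ** 2) ** ((1.0 - n) / 2.0)

# Example: melt held 10 minutes at 180 degrees C, sheared at 100 1/s
print(f"{viscosity(100.0, time=600.0, temp=453.15):.0f} Pa s")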
Abstract:
This paper provides insight into the development of a process model for the expansion of automatic miniload warehouses. The model is based on a literature review and covers four phases of a warehouse expansion: the preparatory phase, the current-state analysis, the design phase, and the decision-making phase. Beyond the literature review, the presented model is based on a reliable data set and can be applied with reasonable effort to ensure an informed decision on the warehouse layout. The model is addressed to users, typically employees of the logistics department, and is oriented toward improving the daily business organization in combination with warehouse expansion planning.
Abstract:
Energy efficiency has become an important research topic in intralogistics. In this field, the focus is placed especially on automated storage and retrieval systems (AS/RS) utilizing stacker cranes, as these systems are widespread and consume a significant portion of the total energy demand of intralogistics systems. Numerical simulation models have been developed to calculate the energy demand rather precisely for discrete single and dual command cycles. Unfortunately, these simulation models are not suitable for quickly determining a mean energy demand value for a complete storage aisle. For this purpose, analytical approaches would be more convenient, but until now analytical approaches have delivered results only for certain configurations. In particular, for commonly used stacker cranes equipped with an intermediate circuit connection within their drive configuration, no analytical approach is available to calculate the mean energy demand. This article addresses this research gap and presents a calculation approach that enables planners to quickly calculate the energy demand of these systems.
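As a rough illustration of the aisle-level mean value targeted above, the following Python sketch averages a strongly simplified single-command-cycle energy model over a uniform grid of storage locations. It deliberately ignores the intermediate-circuit energy exchange that motivates the article, and all crane and rack parameters are invented for the example.

import itertools

# Illustrative crane and rack parameters (assumptions, not from the article)
MASS_HOIST = 500.0   # kg, hoisted mass including load unit
G = 9.81             # m/s^2
F_TRAVEL = 2000.0    # N, equivalent driving resistance along the aisle
ETA = 0.85           # overall drive efficiency
RACK_X, RACK_Y = 50.0, 12.0   # aisle length / rack height [m]
NX, NY = 40, 10               # storage locations per axis

def single_cycle_energy(x: float, y: float) -> float:
    """Energy [J] for one single command cycle to location (x, y) and back:
    travel work both ways plus net hoisting work (no recuperation assumed)."""
    return (2.0 * F_TRAVEL * x + MASS_HOIST * G * y) / ETA

locations = itertools.product(
    (RACK_X * (i + 0.5) / NX for i in range(NX)),
    (RACK_Y * (j + 0.5) / NY for j in range(NY)),
)
energies = [single_cycle_energy(x, y) for x, y in locations]
print(f"mean single-cycle energy: {sum(energies) / len(energies) / 1e3:.1f} kJ")

An analytical counterpart would replace the explicit averaging with a closed-form integral over the rack face.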
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with eyes closed and with eyes open. Intracerebral dipole model source locations in the frequency domain were computed. Eye opening (visual input) caused a centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was a clear anteriorization of the α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (α-2 also to the left). The complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. In contrast to PET and fMRI findings, these results suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
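For readers unfamiliar with the complexity measures named above, the following Python sketch computes an Omega-style complexity as it is commonly defined: the exponential entropy of the normalized eigenvalues of the spatial covariance matrix. Synthetic noise stands in for the 27-channel map series, and the study's exact preprocessing is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((27, 1000))  # channels x time samples (synthetic)

cov = np.cov(eeg)                       # spatial covariance across channels
lam = np.linalg.eigvalsh(cov)
lam = lam[lam > 1e-12]
p = lam / lam.sum()                     # normalized eigenvalue spectrum
omega = np.exp(-(p * np.log(p)).sum())  # effective number of spatial modes
print(f"Omega complexity ~ {omega:.2f}")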
Abstract:
Aims: We examined what type of STEMI patients are more likely to undergo multivessel PCI (MPCI) in a "real-world" setting and whether MPCI leads to worse or better outcomes compared with single-vessel PCI (SPCI) after stratifying patients by risk. Methods and results: Among STEMI patients enrolled in the Swiss AMIS Plus registry between 2005 and 2012 (n=12,000), 4,941 were identified with multivessel disease. We then stratified patients based on MPCI use and their risk. High-risk patients were identified a priori as those with: 1) left main (LM) involvement (lesions, n=263); 2) out-of-hospital cardiac arrest; or 3) Killip class III/IV. Logistic regression models examined for predictors of MPCI use and the association between MPCI and in-hospital mortality. Three thousand eight hundred and thirty-three (77.6%) patients underwent SPCI and 1,108 (22.4%) underwent MPCI. Rates of MPCI were greater among high-risk patients for each of the three categories: 8.6% vs. 5.9% for out-of-hospital cardiac arrest (p<0.01); 12.3% vs. 6.2% for Killip III/IV (p<0.001); and 14.5% vs. 2.7% for LM involvement (p<0.001). Overall, in-hospital mortality after MPCI was higher when compared with SPCI (7.3% vs. 4.4%; p<0.001). However, this result was not present when patients were stratified by risk: in-hospital mortality for MPCI vs. SPCI was 2.0% vs. 2.0% (p=1.00) in low-risk patients and 22.2% vs. 21.7% (p=1.00) in high-risk patients. Conclusions: High-risk patients are more likely to undergo MPCI. Furthermore, MPCI does not appear to be associated with higher mortality after stratifying patients based on their risk.
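A minimal sketch of the kind of logistic regression described above (predictors of MPCI use), run on synthetic stand-in data; the predictor names follow the abstract, but the prevalences, effect sizes, and modeling details are invented for illustration, and the registry data are not reproduced.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4941
# Synthetic high-risk indicators (prevalences chosen arbitrarily)
lm = rng.random(n) < 0.05          # left main involvement
ohca = rng.random(n) < 0.07        # out-of-hospital cardiac arrest
killip34 = rng.random(n) < 0.08    # Killip class III/IV
X = np.column_stack([lm, ohca, killip34]).astype(float)

# Synthetic outcome: higher MPCI probability in the high-risk groups
logit = -1.4 + 1.8 * lm + 0.4 * ohca + 0.8 * killip34
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
for name, coef in zip(["LM", "OHCA", "Killip III/IV"], model.coef_[0]):
    print(f"{name}: odds ratio ~ {np.exp(coef):.2f}")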
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008 among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and allowing access to the relevant methodological process in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state-of-the-art effectively forces scholars to choose between a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) or to make no judgment at all (the unweighted phylogenetic approach). Some basis for judgment of the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysis of one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
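One concrete calculation in the spirit of the method described above can be sketched as follows: test whether the witnesses sharing a variant reading form a connected subgraph of a stemma hypothesis, a simple proxy for genealogically consistent variation. The toy stemma and readings in this Python sketch are invented, and it ignores hypothetical interior witnesses, which a full analysis would bridge.

from collections import deque

# Toy stemma as an undirected adjacency list (archetype 'A' plus witnesses)
stemma = {
    "A": ["B", "C"],
    "B": ["A", "D", "E"],
    "C": ["A", "F"],
    "D": ["B"], "E": ["B"], "F": ["C"],
}

def is_consistent(witnesses: set[str]) -> bool:
    """True if the witnesses sharing a reading induce a connected subgraph."""
    start = next(iter(witnesses))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nb in stemma[node]:
            if nb in witnesses and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == witnesses

print(is_consistent({"B", "D", "E"}))  # True: connected around B
print(is_consistent({"D", "F"}))       # False: no path within the witness set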
Abstract:
AIMS To investigate a pressure-controlled intermittent coronary sinus occlusion (PICSO) system in an ischaemia/reperfusion model. METHODS AND RESULTS We randomly assigned 18 pigs subjected to 60 minutes ischaemia by left anterior descending (LAD) coronary artery balloon occlusion to PICSO (n=12, groups A and B) or to controls (n=6, group C). PICSO started 10 minutes before (group A), or 10 minutes after (group B) reperfusion and was maintained for 180 minutes. A continuous drop of distal LAD pressure was observed in group C. At 180 minutes of reperfusion, LAD diastolic pressure was significantly lower in group C compared to groups A and B (p=0.02). LAD mean pressure was significantly less than the systemic arterial mean pressure in group C (p=0.02), and the diastolic flow slope was flat, compared to groups A and B (p=0.03). IgG and IgM antibody deposition was significantly higher in ischaemic compared to non-ischaemic tissue in group C (p<0.05). Significantly more haemorrhagic lesions were seen in the ischaemic myocardium of group C, compared to groups A and B (p=0.002). The necrotic area differed non-significantly among groups. CONCLUSIONS PICSO was safe and effective in improving coronary perfusion pressure and reducing antibody deposition consistent with reduced microvascular obstruction and ischaemia/reperfusion injury.
Abstract:
Digital analysis of the occlusal contacts can be performed with the T-scan device (T-Scan III, TekScan, Boston, USA). However, the thickness of the interocclusal T-scan sheet (100 μm) may lead to a displacement of the mandible. Thus, the aim of this study was to investigate the impact of the T-scan sheet on the position of the mandibular condyles in maximum intercuspidation. Twenty dentate subjects with healthy jaw function were enrolled in the study. An ultrasonic axiography device was used to measure the position of the condyles. Ten 3D condyle positions in maximum intercuspidation of the teeth were recorded: first the reference position without the sheet, then three times without the sheet, three times with the sheet, and finally again three times without the sheet. There was a statistically significant difference (Wilcoxon matched-pairs test) between the condyle positions with and without the interocclusally positioned T-scan sheet (P < 0.0005). The T-scan device led to a displacement of the condyles of about 1 mm, mainly in the ventral direction (P = 0.005). Thus, the occlusal analysis is not performed in the physiological position of maximum intercuspidation. This has to be considered when interpreting the measured contact points.
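A minimal sketch of the statistical comparison reported above: a Wilcoxon matched-pairs (signed-rank) test on condyle positions measured with and without the T-scan sheet. The numbers are synthetic stand-ins with an assumed shift of about 1 mm in the ventral direction, not the study's data.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_subjects = 20
# Synthetic ventral condyle position [mm] relative to the reference position
without_sheet = rng.normal(0.0, 0.15, n_subjects)
with_sheet = without_sheet + rng.normal(1.0, 0.3, n_subjects)  # ~1 mm shift

stat, p = wilcoxon(with_sheet, without_sheet)
print(f"W = {stat:.1f}, p = {p:.4f}")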
Abstract:
A model of Drosophila circadian rhythm generation was developed to represent feedback loops based on transcriptional regulation of per, Clk (dclock), Pdp-1, and vri (vrille). The model postulates that histone acetylation kinetics make transcriptional activation a nonlinear function of [CLK]. Such a nonlinearity is essential to simulate robust circadian oscillations of transcription in our model and in previous models. Simulations suggest that two positive feedback loops involving Clk are not essential for oscillations, because oscillations of [PER] were preserved when Clk, vri, or Pdp-1 expression was fixed. However, eliminating positive feedback by fixing vri expression altered the oscillation period. Eliminating the negative feedback loop in which PER represses per expression abolished oscillations. Simulations of per or Clk null mutations, of per overexpression, and of vri, Clk, or Pdp-1 heterozygous null mutations altered model behavior in ways similar to experimental data. The model simulated a photic phase-response curve resembling experimental curves, and oscillations entrained to simulated light-dark cycles. Temperature compensation of oscillation period could be simulated if temperature elevation slowed PER nuclear entry or PER phosphorylation. The model makes experimental predictions, some of which could be tested in transgenic Drosophila.
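The abstract's central mechanistic claim, that a strong nonlinearity in transcriptional activation is essential for robust oscillation, can be illustrated with a minimal Goodwin-type negative feedback loop standing in for PER repression of per transcription. The equations, the Hill exponent, and all rate constants in this Python sketch are illustrative; the published model is considerably larger.

import numpy as np
from scipy.integrate import solve_ivp

N_HILL = 10.0  # strong nonlinearity; this three-variable loop needs n > ~8

def goodwin(_t, state):
    m, p, r = state  # per mRNA, cytoplasmic PER, nuclear repressor pool
    return [1.0 / (1.0 + r ** N_HILL) - 0.2 * m,   # repressible transcription
            m - 0.2 * p,                           # translation
            p - 0.2 * r]                           # nuclear entry

sol = solve_ivp(goodwin, (0.0, 200.0), [0.1, 0.1, 0.1], max_step=0.5)
m = sol.y[0][sol.t > 100.0]  # discard the initial transient
print(f"per mRNA oscillates between {m.min():.2f} and {m.max():.2f}")

Lowering N_HILL to a small value such as 2 makes the trajectory decay to a fixed point, mirroring the claim that the nonlinearity is essential for sustained oscillation.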