297 results for Python


Relevance:

10.00%

Publisher:

Abstract:

Dynamically typed programming languages such as JavaScript and Python defer type checking until run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done with a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions are made to basic block versioning to give it interprocedural optimization capabilities. A first extension gives it the ability to attach type information to object properties and global variables. Entry point specialization then allows it to pass type information from calling functions to called functions. Finally, call continuation specialization makes it possible to transmit the types of return values from callees back to callers at no dynamic cost.
We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
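The lazy specialization idea described in this abstract can be illustrated with a minimal Python sketch (not the thesis' actual VM, which generates machine code): a block is specialized the first time it is reached with a given type context, the specialized version is cached, and subsequent executions with the same known types skip the dynamic type test.

```python
# Illustrative sketch of lazy basic block versioning. Names and the two-operand
# "add" block are hypothetical; a real JIT would emit machine code, not closures.

def make_block_versioner():
    versions = {}  # (block_id, type_context) -> specialized callable

    def specialize(block_id, type_context, generic_op):
        key = (block_id, type_context)
        if key not in versions:
            if type_context == ("int", "int"):
                # Operand types are known from context: emit a version
                # with no dynamic type test at all.
                versions[key] = lambda a, b: a + b
            else:
                # Types unknown: fall back to the generic, type-tested version.
                def checked(a, b):
                    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
                        raise TypeError("unsupported operand types")
                    return generic_op(a, b)
                versions[key] = checked
        return versions[key]

    return specialize, versions
```

Because versions are created lazily, only type contexts that actually occur at run time ever get a specialized block, which is what keeps the technique cheap compared to a whole-program type inference pass.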

Relevance:

10.00%

Publisher:

Abstract:

adesp. fr. 700 Kn.-Sn. may come from Euripides' Andromeda rather than from a Niobe. The following elements of frs. (a) and (b) can correspond to what is known of Euripides' tragedy: (a) the likeness to a statue; (b) the bride of Hades; (c) the character's silence; (d) the collaboration with the Moirai; (e) the contrast between royal fortune and the misfortune and suffering of the parents. It is therefore not necessary to emend the received text to remove μάγους πάγας and the reference to magic snares in v. 5, which fits well with Medusa and Perseus.

Relevance:

10.00%

Publisher:

Abstract:

This article examines the possible existence of a ritual in ancient Macedonia similar to the Babylonian Akîtu. We argue that the victory of Zeus over his enemies in the Gigantomachy and Typhonomachy was celebrated by the Macedonian kings to strengthen their hold on power. Although there is no direct evidence of this, a number of different sources link the Macedonians directly or indirectly with Typhon and the Giants. Moreover, the strategic importance of Mount Olympus to the Macedonians suggests that the victory of Zeus over the forces of evil was commemorated by them in some way.

Relevance:

10.00%

Publisher:

Abstract:

This paper examines the integration of a tolerance design process within the Computer-Aided Design (CAD) environment, having identified the potential to create an intelligent Digital Mock-Up [1]. The tolerancing process is complex in nature, and as such reliance on Computer-Aided Tolerancing (CAT) software and domain experts can create a disconnect between the design and manufacturing disciplines. It is necessary to implement the tolerance design procedure at the earliest opportunity to integrate both disciplines and to reduce workload in tolerance analysis and allocation at critical stages in product development when production is imminent.
The work seeks to develop a methodology that will allow for a preliminary tolerance allocation procedure within CAD. An approach to tolerance allocation based on sensitivity analysis is implemented on a simple assembly to review its contribution to an intelligent DMU. The procedure is developed using Python scripting for CATIA V5, with analysis results aligning with those in the literature. A review of its implementation and requirements is presented.
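A minimal sketch of what a sensitivity-based tolerance allocation step can look like, under common worst-case stack-up assumptions (the paper's actual CATIA V5 procedure is not shown, and the function name and budgeting rule here are illustrative):

```python
# Hypothetical sketch of sensitivity-based tolerance allocation.
# sensitivities[i] is dY/dX_i, the partial derivative of the assembly
# dimension Y with respect to component dimension X_i.

def allocate_tolerances(sensitivities, t_assembly):
    # Worst-case requirement: sum(|S_i| * t_i) <= t_assembly.
    # Give each dimension an equal share of the budget, so more sensitive
    # dimensions receive proportionally tighter tolerances:
    #   t_i = t_assembly / (n * |S_i|)
    n = len(sensitivities)
    return [t_assembly / (n * abs(s)) for s in sensitivities]
```

With sensitivities (1, 2, 4) and an assembly budget of 0.6, the allocation tightens the most sensitive dimension fourfold relative to the least sensitive one while exactly consuming the worst-case budget.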

Relevance:

10.00%

Publisher:

Abstract:

During this thesis work a coupled thermo-mechanical finite element (FE) model was built to simulate hot rolling in the blooming mill at Sandvik Materials Technology (SMT) in Sandviken. The blooming mill is the first in a long line of processes that continuously or ingot cast ingots are subjected to before becoming finished products. The aim of this thesis work was twofold. The first was to create a parameterized finite element (FE) model of the blooming mill. The commercial FE software package MSC Marc/Mentat was used to create this model and the programming language Python was used to parameterize it. Second, two different pass schedules (A and B) were studied and compared using the model. The two pass series were evaluated with focus on their ability to heal centreline porosity, i.e. to close voids in the centre of the ingot. This evaluation was made by studying the hydrostatic stress (σm), the von Mises stress (σeq) and the plastic strain (εp) in the centre of the ingot. From these parameters the stress triaxiality (Tx) and the hydrostatic integration parameter (Gm) were calculated for each pass in both series using two different transportation times (30 and 150 s) from the furnace. The relation between Gm and an analytical parameter (Δ) was also studied. This parameter is the ratio between the mean height of the ingot and the contact length between the rolls and the ingot, which is useful as a rule of thumb to determine the homogeneity or penetration of strain for a specific pass. The pass series designed with fewer passes (B), many with greater reduction, was shown to achieve better void closure theoretically. It was also shown that a temperature gradient, which is the result of a longer holding time between the furnace and the blooming mill, leads to improved void closure.
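The two void-closure indicators named in the abstract can be sketched in a few lines, assuming the common definitions Tx = σm/σeq and Gm as the integral of Tx over plastic strain; the actual FE post-processing in the thesis may differ in detail:

```python
# Sketch of the void-closure indicators, under assumed standard definitions.

def triaxiality(sigma_m, sigma_eq):
    # Stress triaxiality Tx = sigma_m / sigma_eq (negative in compression).
    return sigma_m / sigma_eq

def hydrostatic_integration(sigma_m_hist, sigma_eq_hist, eps_p_hist):
    # Gm = integral of Tx over the plastic strain history of a pass,
    # approximated here with the trapezoidal rule.
    gm = 0.0
    for i in range(1, len(eps_p_hist)):
        tx0 = triaxiality(sigma_m_hist[i - 1], sigma_eq_hist[i - 1])
        tx1 = triaxiality(sigma_m_hist[i], sigma_eq_hist[i])
        gm += 0.5 * (tx0 + tx1) * (eps_p_hist[i] - eps_p_hist[i - 1])
    return gm
```

A large negative Gm over a pass indicates sustained compressive hydrostatic stress during plastic straining, which is what favours closure of centreline voids.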


Relevance:

10.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

10.00%

Publisher:

Abstract:

Lappeenranta University of Technology is studying the use of low-voltage DC electricity. Together with Järvi-Suomen Energia Oy and Suur-Savon Sähkö Oy, the university has built an experimental low-voltage DC distribution network that provides field conditions for low-voltage DC research with real customers, and makes it possible to verify LVDC technology and other smart grid functions in the field. The DC link of the network is built between a 20 kV distribution network and four customers. The 20 kV medium voltage is rectified at a converter station to ±750 V DC and converted back to 400/230 V AC close to the customers. The purpose of this bachelor's thesis is to create a database for the measurement data accumulating from the low-voltage DC network. The database was considered necessary so that the measurement results of the network can later be examined in a single, consistent format. One research question was how to organize and visualize all the measurement data accumulating on the servers. The work also considers three user groups that could benefit from the database (household customers, distribution companies and the research laboratory) and discusses the usefulness and significance of the database for these users. A second research question was which of the data stored in the database would be essential to capture from the perspective of these customers, and how they could retrieve information from the database. The research methods are based on already existing measurement data. Both printed and electronic literature was used for the work. As a result, a database was created with the MySQL Workbench software, together with programs for collecting and processing the measurement data written in the Python programming language. In addition, a separate MATLAB interface was created for visualizing the measurement data of the three customer groups.
The database and its visualization give consumers a better understanding of their own electricity use, and provide distribution companies and research laboratories with information on, among other things, power quality and network load.
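A minimal sketch of the measurement-database idea, with hypothetical table and column names; sqlite3 stands in for the MySQL database used in the work so that the example is self-contained:

```python
# Hypothetical schema for per-customer electrical measurements; the thesis'
# actual MySQL Workbench schema is not reproduced here.
import sqlite3

def create_measurement_db(conn):
    conn.execute("""
        CREATE TABLE IF NOT EXISTS measurement (
            customer_id INTEGER,
            measured_at TEXT,
            voltage_v   REAL,
            power_w     REAL
        )""")

def store(conn, customer_id, measured_at, voltage_v, power_w):
    conn.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                 (customer_id, measured_at, voltage_v, power_w))

def mean_power(conn, customer_id):
    # Example of the kind of per-user aggregate a visualization front end
    # (e.g. the MATLAB interface mentioned above) would query.
    row = conn.execute(
        "SELECT AVG(power_w) FROM measurement WHERE customer_id = ?",
        (customer_id,)).fetchone()
    return row[0]
```

Storing all measurements in one consistent schema is what allows the three user groups to run their own queries against the same data.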

Relevance:

10.00%

Publisher:

Abstract:

This study provides the first spatially detailed and complete inventory of Ambrosia pollen sources in Italy, the third largest centre of ragweed in Europe. The inventory relies on a well-tested top-down approach that combines local knowledge, detailed land cover, pollen observations and a digital elevation model, and assumes that permanent ragweed populations mainly grow below 745 m. The pollen data were obtained from 92 volumetric pollen traps located throughout Italy during 2004-2013. Land cover is derived from Corine Land Cover information with 100 m resolution. The digital elevation model is based on the NASA shuttle radar mission with 90 m resolution. The inventory is produced using a combination of ArcGIS and Python for automation, validated using cross-correlation, and has a final resolution of 5 km x 5 km. The method includes a harmonization of the inventory with other European inventories for the Pannonian Plain, France and Austria in order to provide a coherent picture of all major ragweed sources. The results show that the mean annual pollen index varies from 0 in South Italy to 6779 in the Po Valley. The results also show that very large pollen indexes are observed in the Milan region, but this region has smaller amounts of ragweed habitats compared to other parts of the Po Valley and known ragweed areas in France and the Pannonian Plain. A significant decrease in Ambrosia pollen concentrations was recorded in 2013 by pollen monitoring stations located in the Po Valley, particularly in the northwest of Milan. This was the same year as the appearance of the Ophraella communa leaf beetle in Northern Italy. These results suggest that ragweed habitats near the Milan region have very high densities of Ambrosia plants compared to other known ragweed habitats in Europe. 
The Milan region therefore appears to contain habitats with the largest ragweed infestation in Europe, but the smaller amount of habitats likely causes the pollen index to be lower compared to central parts of the Pannonian Plain. A low number of densely packed habitats may have increased the impact of the Ophraella beetle and might account for the documented decrease in airborne Ambrosia pollen levels, an event that cannot be explained by meteorology alone. Further investigations that model atmospheric pollen before and after the appearance of the beetle in this part of Northern Italy are needed to assess the influence of the beetle on airborne Ambrosia pollen concentrations. Future work will focus on short distance transport episodes for stations located in the Po Valley, and long distance transport events for stations in Central Italy that exhibit peaks in daily airborne Ambrosia pollen levels.
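The elevation-masking step of the top-down approach can be sketched simply: a grid cell counts as a potential ragweed source only if it contains suitable land cover and lies below the 745 m threshold. Grids are plain nested lists here and all names are illustrative; the study itself used ArcGIS/Python on 100 m Corine and 90 m SRTM data.

```python
# Simplified sketch of the elevation mask applied to habitat land cover.
ELEVATION_LIMIT_M = 745

def source_grid(habitat_fraction, elevation_m):
    # Zero out the habitat fraction wherever the cell elevation is at or
    # above the assumed limit for permanent ragweed populations.
    rows = []
    for frac_row, elev_row in zip(habitat_fraction, elevation_m):
        rows.append([
            frac if elev < ELEVATION_LIMIT_M else 0.0
            for frac, elev in zip(frac_row, elev_row)
        ])
    return rows
```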

Relevance:

10.00%

Publisher:

Abstract:

Conventional taught learning practices often experience difficulties in keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without feeling fatigue or frustration, as they would typically do with a comparable learning task. Based on this notion, the academic community is keen on exploring methods that can deliver deep learner engagement and has shown increased interest in adopting gamification – the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios – as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debatable though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task like the teaching of computer programming, as shown in this paper. The presented quasi-experimental study used a combination of instructor feedback, real time sequence of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the “Kahoot!” Classroom Response System (CRS), the classroom version of the TV game show “Who Wants To Be A Millionaire?”, and Codecademy’s interactive platform formed the basis for a learning model which was applied to an entry-level Python programming course. Students were thus allowed to experience multiple interlocking methods similar to those commonly found in a top quality game experience. 
To assess gamification's impact on learning, empirical data from the gamified group were compared to those from a control group that was taught through a traditional learning approach, similar to the one which had been used during previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and showed that the gamified approach was motivating and enriching for both students and instructors.

Relevance:

10.00%

Publisher:

Abstract:

Remote sensors and high-performance computers are currently the main instruments used to collect and produce oceanographic data. With these data it is possible to carry out studies that simulate and predict the behaviour of the ocean through regional numerical models. Important topics in oceanography include environmental impacts, anthropogenic contamination, renewable energy, port operations, and so on. However, given the large volume of data generated by environmental institutions, in the form of results from global models such as HYCOM (Hybrid Coordinate Ocean Model) and from the Reanalysis programs of NOAA (National Oceanic and Atmospheric Administration), it becomes necessary to create computational routines to process initial and boundary conditions so that they can be applied to regional models such as TELEMAC3D (www.opentelemac.org). Problems related to low resolution, missing data, and the need to interpolate to different meshes or vertical coordinate systems call for a computational tool that performs this processing properly. To this end, routines were developed in the Python programming language, employing nearest-neighbour interpolation, so that initial and boundary conditions for a numerical test simulation could be prepared from raw data of the HYCOM model and the NOAA Reanalysis program. These results were compared with another numerical result in which the conditions were built with a more sophisticated interpolation method, written in another language, which was already in use in the laboratory. The analysis of the results showed that the routine developed in this work functions adequately for generating initial and boundary conditions for the TELEMAC3D model.
However, a more sophisticated interpolator should be developed in order to improve the quality of the interpolations, optimize the computational cost, and produce conditions that are more realistic for use with the TELEMAC3D model.
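The nearest-neighbour interpolation used to map a coarse global field onto the points of a regional mesh can be sketched as follows; the function and argument names are illustrative, not the routines developed in the work:

```python
# Sketch of nearest-neighbour interpolation: each target mesh point takes the
# value of the closest source grid point (2-D horizontal coordinates assumed).

def nearest_neighbour(src_points, src_values, target_points):
    def dist2(p, q):
        # Squared Euclidean distance; the square root is not needed
        # for finding the minimum.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    out = []
    for t in target_points:
        best = min(range(len(src_points)), key=lambda i: dist2(src_points[i], t))
        out.append(src_values[best])
    return out
```

This is the simplest possible scheme, which matches the abstract's conclusion: it works, but a smoother interpolator (e.g. bilinear) would give more realistic boundary conditions.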


Relevance:

10.00%

Publisher:

Abstract:

The goal of this final-year project (TFG) is to carry out and document the upgrade of a real enterprise software system belonging to Foreign Exchange Solutions SL, a company dedicated to currency exchange transactions. The system is implemented in Python 2.7 using Django, the rapid web application development framework; starting from version 1.3.1, it will end the project on version 1.4.10, which requires upgrading all related libraries, improving the quality of the code and even changing the structure of the project, while paying special attention to unit and regression tests to verify the correct behaviour of the system throughout development. All of this aims to obtain the new features and capabilities that a newer version offers, to improve the quality of the application (increasing code reuse and reducing future errors thanks to simpler, more readable code), to increase performance, and to achieve good test coverage. We will also use the agile methodology Scrum, the PostgreSQL DBMS, and other tools such as Solr, ElasticSearch, Redis and Celery, with Mercurial for version control.

Relevance:

10.00%

Publisher:

Abstract:

Data loggers are important measurement devices whose purpose is to collect measurement data over long periods of time. Data loggers can be used, for example, to monitor actuators that are part of an industrial process, or a household energy system. Industrial-grade data loggers typically cost hundreds or thousands of euros. This work sought a cheap and easy-to-use alternative to industrial-grade devices that is nevertheless sufficiently capable and reliable. A data logger was designed and implemented on the Raspberry Pi platform and tested in conditions corresponding to a real industrial environment. Similar hardware and software solutions were surveyed in the literature and in internet articles and used as the basis of the data logging system. A simple data logging program using the Modbus communication protocol was written in Python for the Raspberry Pi. Based on the tests, the implemented data logger works and is capable of measurements on a par with commercial data loggers, at least at low sampling rates. The implemented data logger is also considerably cheaper than commercial ones. From the point of view of ease of use, some shortcomings were identified, which are discussed in the form of ideas for further development.

Relevance:

10.00%

Publisher:

Abstract:

El proyecto consiste en un portal de búsqueda de vulnerabilidades web, llamado Krashr, cuyo objetivo es el de buscar si una página web introducida por un usuario contiene algún tipo de vulnerabilidad explotable, además de tratar de ayudar a este usuario a arreglar las vulnerabilidades encontradas. Se cuenta con un back-end realizado en Python con una base de datos PostreSQL, un front-end web realizado en AngularJS y una API basada en Node.js y Express que comunica los dos frentes.
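As an illustration of one kind of check such a scanner back end might run (this is not Krashr's actual logic, and the header list is an assumption), a scan can flag missing HTTP security headers in a fetched response:

```python
# Hypothetical check: report which common security headers are absent from a
# response. The caller supplies the headers dict; no network code is shown.

EXPECTED_HEADERS = [
    "content-security-policy",
    "x-frame-options",
    "strict-transport-security",
]

def missing_security_headers(response_headers):
    # Header names are case-insensitive, so compare in lowercase.
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h not in present]
```

Each finding can then be paired with a remediation hint, matching the portal's stated goal of helping the user fix what was found.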