915 results for Computer Programs


Relevance: 60.00%

Abstract:

Design rights represent an interesting example of how the EU legislature has successfully regulated an otherwise heterogeneous field of law. Yet this type of protection is not for all. The tools created by EU intervention have been drafted paying much more attention to the industry sector than to designers themselves. In particular, modern, digitally based, individual or small-sized, 3D-printing, open designers and their needs are largely neglected by such legislation. There is obviously nothing wrong in drafting legal tools around the needs of an industrial sector with an important role in the EU economy; on the contrary, this is a legitimate and good decision of industrial policy. However, good legislation should be fair, balanced, and (technologically) neutral in order to offer suitable solutions to all the players in the market, and all the citizens in society, without discriminating against the smallest or the newest: the cost would be to stifle innovation. The use of printing machinery to manufacture physical objects created digitally thanks to computer programs such as Computer-Aided Design (CAD) software has been in place for quite a few years, and it is actually the standard in many industrial fields, from aeronautics to home furniture. The change in recent years that has the potential to be a paradigm-shifting factor is the combination of the popularization of such technologies (price, size, usability, quality) and the diffusion of a culture based on access to and reuse of knowledge. We will call this blend Open Design. It is probably still too early, however, to say whether "3D printing" will be used in the future to refer to a major event in human history, or will instead be relegated to a lonely Wikipedia entry, similarly to "Betamax" (copyright scholars are familiar with it for other reasons). It is not too early, however, to develop a legal analysis that will hopefully contribute to clarifying the major issues found in the current EU design law structure, why many modern open designers will probably find better protection in copyright, and whether they can successfully rely on open licenses to achieve their goals. With regard to the latter point, we will use Creative Commons (CC) licenses to test our hypothesis, due to their unique characteristic of being modular, i.e. of having different license elements (clauses) that licensors can choose in order to adapt the license to their own needs.

Relevance: 60.00%

Abstract:

The purpose of the article is to provide a doctrinal summary of the concept, rules and policy of exhaustion, first on the international and EU level and later under the law of the United States. Based upon this introduction, the paper turns to an analysis of the doctrine in the pioneering court decisions handed down in UsedSoft, ReDigi and the German e-book/audiobook cases, and in the pending Tom Kabinet case from the Netherlands. Questions related to the licence-versus-sale dichotomy; the so-called umbrella solution; the "new copy theory"; migration of digital copies via the internet; the forward-and-delete technology; the issue of lex specialis; and the theory of functional equivalence are covered later on. The author stresses that the answers given by the respective judges in the cited cases are not the final word in the discussion. The UsedSoft preliminary ruling and the subsequent German domestic decisions highlight a special treatment for computer programs. On the other hand, the refusal of digital exhaustion in the ReDigi and the audiobook/e-book cases might be in accordance with the present wording of copyright law; however, it does not necessarily reflect the proper trends of our age. The paper takes the position that the need for digital exhaustion is constantly growing in society and amongst businesses. Indeed, there are reasonable arguments in favour of equalizing the resale of works sold in tangible and intangible formats. Consequently, the paper urges the reconsideration of the norms on exhaustion at the international and EU level.

Relevance: 60.00%

Abstract:

Individuals with intellectual disabilities (ID) often struggle with learning how to read. Reading difficulties seem to be the most common secondary condition of ID. Only one in five children with mild or moderate ID achieves even minimal literacy skills. However, literacy education for children and adolescents with ID has been largely overlooked by researchers and educators. While there is little research on reading in children with ID, many training studies have been conducted with other populations with reading difficulties. The most common approach to acquiring literacy skills consists of sophisticated programs that train phonological skills and auditory perception. Only a few studies have investigated the influence of implicit learning on literacy skills. Implicit learning processes seem to be largely independent of age and IQ. Children are sensitive to the statistics of their learning environment. By frequent word reading they acquire implicit knowledge about the frequency of single letters and letter patterns in written words. Additionally, semantic connections not only improve word understanding but also facilitate the storage of words in memory. Advances in communication technology have introduced new possibilities for remediating literacy skills. Computers can provide training material in attractive ways, for example through animations and immediate feedback. These opportunities can scaffold and support the attention processes central to learning. Thus, the aim of this intervention study was to develop and implement a computer-based word-picture training, based on statistical and semantic learning, and to examine the training effects on reading, spelling and attention in children and adolescents (9-16 years) diagnosed with mental retardation (general IQ < 74). Fifty children participated in four to five weekly training sessions of 15-20 minutes over 4 weeks, and completed assessments of attention, reading, spelling, short-term memory and fluid intelligence before and after training. After a first assessment (T1), the entire sample was divided into a training group (group A) and a waiting control group (group B). After 4 weeks of training with group A, a second assessment (T2) was administered to both groups. Afterwards, group B was trained for 4 weeks, before a last assessment (T3) was carried out in both groups. Overall, the results showed that the word-picture training led to substantial gains in word decoding and attention for both training groups. These effects were preserved six weeks later (group A). There was also a clear tendency towards improvement in spelling after training for both groups, although the effect did not reach significance. These findings highlight the fact that implicit statistical learning training, delivered in a playful way by motivating computer programs, can promote not only reading development but also attention in children with intellectual disabilities.

Relevance: 60.00%

Abstract:

A phase I clinical trial is mainly designed to determine the maximum tolerated dose (MTD) of a new drug. Optimization of phase I trial design is crucial to minimize the number of enrolled patients exposed to unsafe dose levels and to provide reliable information to the later phases of clinical trials. Although it has been criticized for its inefficient MTD estimation, the traditional 3+3 method remains dominant in practice due to its simplicity and conservative estimation. Many new designs have been shown to generate more credible MTD estimates, such as the Continual Reassessment Method (CRM). Despite its acknowledged better performance, the CRM design is still not widely used in real trials. Several factors contribute to the difficulty of CRM adoption in practice. First, CRM is not widely accepted by regulatory agencies such as the FDA in terms of safety: it is considered less conservative and tends to expose more patients above the MTD level than the traditional design. Second, CRM is relatively complex and not intuitive for clinicians to fully understand. Third, the CRM method takes much more time and needs statistical experts and computer programs throughout the trial. The current situation is that clinicians still tend to follow the trial process they are comfortable with, and this is not likely to change in the near future. This motivated us to improve the accuracy of MTD selection while following the procedure of the traditional design to maintain simplicity. We found that in the 3+3 method the dose transition and the MTD determination are relatively independent, so we proposed to separate the two stages. The dose-transition rule remains the same as in the 3+3 method. After obtaining the toxicity information from the dose-transition stage, we apply an isotonic transformation to ensure a monotonically increasing ordering of the toxicity estimates before selecting the optimal MTD. To compare operating characteristics, we carried out 10,000 simulated trials under different dose-setting scenarios, comparing the isotonic modified method with the standard 3+3 method, CRM, the biased coin design (BC) and the k-in-a-row design (KIAW). The isotonic modified method improved the MTD estimation of the standard 3+3 in 39 out of 40 scenarios, and the improvement is much greater when the target is 0.3 rather than 0.25. The modified design is also competitive when compared with the other selected methods: a CRM method performed better in general but was not as stable as the isotonic method across the different dose settings. The results demonstrate that our proposed isotonic modified method is not only easily conducted, using the same procedure as 3+3, but also outperforms the conventional 3+3 design. It can also be applied to determine the MTD for any given target toxicity level (TTL). These features make the isotonic modified method of practical value in phase I clinical trials.
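The isotonic step lends itself to a short sketch. The following Python code is an illustration, not the authors' implementation: it isotonizes the observed per-dose toxicity rates with the weighted pool-adjacent-violators algorithm (PAVA) and then selects the dose whose isotonic estimate is closest to the target toxicity level; real designs add tie-breaking and safety restrictions.

```python
def pava(rates, weights):
    """Weighted pool-adjacent-violators: non-decreasing fit to `rates`."""
    blocks = []  # each block: [weighted mean, total weight, doses merged]
    for r, w in zip(rates, weights):
        blocks.append([r, w, 1])
        # merge adjacent blocks while the monotone ordering is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, n1 + n2])
    fitted = []
    for m, _, n in blocks:
        fitted.extend([m] * n)
    return fitted

def select_mtd(toxicities, n_patients, target=0.25):
    """Pick the dose whose isotonic toxicity estimate is nearest the target."""
    rates = [t / n for t, n in zip(toxicities, n_patients)]
    iso = pava(rates, n_patients)
    return min(range(len(iso)), key=lambda i: abs(iso[i] - target))

# Example: toxicities observed at four doses during the 3+3 transition stage
print(select_mtd([0, 1, 0, 2], [3, 6, 3, 6]))  # -> 3 (index of selected dose)
```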

Relevance: 60.00%

Abstract:

SIMBAA is a spatially explicit, individual-based simulation model. It was developed to analyse the response of populations of Antarctic benthic species, and of their diversity, to iceberg scouring, a disturbance that causes high local mortality and thereby provides potential space for new colonisation. Traits can be attributed to model species, e.g. in terms of reproduction, dispersal, and life span. Physical disturbances can be designed in space and time, e.g. in terms of size, shape, and frequency. Environmental heterogeneity can be considered through cell-specific capacities to host a certain number of individuals. When a grid cell becomes empty (after a disturbance event or due to the natural mortality of an individual), a lottery decides which individual, from which species, stored in a pool of candidates for this cell will recruit in that cell; a sketch of this step is given below. After a defined period the individuals become mature and their offspring are dispersed and stored in the pool of candidates. The biological parameters and disturbance regimes determine how long an individual lives. The temporal development of single populations of species, as well as Shannon diversity, is depicted graphically in the main window, and the primary values are listed. Example simulations can be loaded and saved as sgf-files. The results are also shown in an additional window in a dimensionless area of 50 x 50 cells containing single individuals depicted as circles; their colour indicates the assignment to the self-designed model species and their size represents their age. Dominant species per cell and disturbed areas can also be depicted. The output of simulation runs can be saved as images, which can be assembled into video clips with standard computer programs (see the GIF examples, of which "Demo 1" represents the response of the Antarctic benthos to iceberg scouring and "Demo 2" a simulation of a deep-sea benthic habitat).
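A minimal Python sketch of the lottery recruitment step, with assumed data structures (this is an illustration, not SIMBAA source code): an empty cell draws one candidate at random from its pool, so species with more offspring in the pool are proportionally more likely to win.

```python
import random

def lottery_recruit(grid, pools):
    """grid: dict cell -> occupant dict or None;
    pools: dict cell -> list of (species_id, age) candidate offspring."""
    for cell, occupant in grid.items():
        if occupant is None and pools.get(cell):
            species, _ = random.choice(pools[cell])  # the lottery draw
            grid[cell] = {"species": species, "age": 0}
            pools[cell] = []  # pool consumed on recruitment (assumption)

# Example: a 2-cell grid where cell (0, 1) is empty
grid = {(0, 0): {"species": "A", "age": 3}, (0, 1): None}
pools = {(0, 1): [("A", 0), ("B", 0), ("B", 0)]}  # B is twice as likely
lottery_recruit(grid, pools)
print(grid[(0, 1)])
```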

Relevance: 60.00%

Abstract:

Earthquakes constitute one of the most important sources of dynamic loads acting on structures and their foundations. When an earthquake occurs, the energy released generates seismic waves that can give rise to structural vibrations, settlements of building foundations and pressures on retaining walls, with possible sliding, uplifting or even overturning of structures; the soil can also liquefy, losing its capacity of support. The study of the effects of earthquakes on structures involves, because of its soil-structure interaction nature, diverse disciplines such as Structural Analysis, Soil Mechanics and Earthquake Engineering. Aspects that have received limited research attention are the effects of nonlinear soil behaviour and of geometric nonlinearities, such as sliding and uplifting of foundations, under seismic loads. This Thesis first studies the seismic pressures on, and potential sliding of, retaining walls, comparing the predictions of two types of formulations and assessing their range of applicability and limitations: pseudo-static methods, as proposed by Mononobe-Okabe (1929) with the contribution of Whitman-Liao (1985), and analytical formulations such as the one developed by Veletsos and Younan (1994) for rigid walls.
The Thesis deals next with the effects of nonlinear soil behaviour on the dynamic stiffness of circular mat foundations, like those of the chimney of a Thermal Power Station or the reactor building of a Nuclear Power Plant, as a function of frequency and load level. Finally, the seismic response of these two structures, accounting for the potential sliding and uplifting of the foundation under a given earthquake, is studied following the approach proposed by Wolf (1988). To carry out these studies a number of special-purpose computer programs were developed (MUROSIS, VELETSOS, INTESES and SEPARSE), whose listings and details are included in the Appendices. The conclusions derived from these studies and recommendations for future work are presented in Chapter 6.
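As an illustration of the pseudo-static approach referred to above, the following Python sketch (not one of the thesis programs; parameter names are generic) evaluates the classical Mononobe-Okabe active earth-pressure coefficient and the resulting seismic thrust on a retaining wall.

```python
from math import atan, cos, radians, sin, sqrt

def k_ae(phi, delta, beta, slope, kh, kv=0.0):
    """Mononobe-Okabe active coefficient. Angles in radians: phi = soil
    friction angle, delta = wall-soil friction, beta = wall batter from
    vertical, slope = backfill inclination; kh, kv = seismic coefficients."""
    psi = atan(kh / (1.0 - kv))  # seismic inertia angle
    num = cos(phi - psi - beta) ** 2
    den = (cos(psi) * cos(beta) ** 2 * cos(delta + beta + psi)
           * (1 + sqrt(sin(phi + delta) * sin(phi - psi - slope)
                       / (cos(delta + beta + psi) * cos(slope - beta)))) ** 2)
    return num / den

# Example: vertical wall, level backfill, phi = 35 deg, delta = phi/2, kh = 0.15
K = k_ae(radians(35), radians(17.5), 0.0, 0.0, kh=0.15)
P_ae = 0.5 * K * 18e3 * 5.0 ** 2 * (1 - 0.0)  # gamma = 18 kN/m3, H = 5 m
print(round(K, 3), round(P_ae), "N per metre of wall")
```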

Relevance: 60.00%

Abstract:

This project covers the design and calculation of the electrical installation, and the lighting calculation, for an office building and pharmaceutical products warehouse. The design is required for the proper operation of the industrial building that is the object of the installation. By means of computer programs, the aim is to produce a guide that speeds up the sizing and calculation processes for this type of project; since these programs incorporate fully up-to-date standards, they also streamline the process of compliance with the regulations. The lighting calculation yields a close approximation of the required power while ensuring the necessary lighting conditions, and the electrical circuit calculation is very accurate and easy to modify for future expansions.
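As an illustration of the kind of lighting calculation described, the sketch below applies the generic lumen method; the utilization and maintenance factors and the luminaire data are assumed, illustrative values, and this is not the software used in the project.

```python
from math import ceil

def luminaires_needed(E_lux, area_m2, lm_per_luminaire, UF=0.5, MF=0.8):
    """Lumen method: E = n * F * UF * MF / A  ->  n = E * A / (F * UF * MF).
    UF: utilization factor (room geometry/reflectances); MF: maintenance factor."""
    return ceil(E_lux * area_m2 / (lm_per_luminaire * UF * MF))

# Example: 500 lux over a 200 m2 office area with 5000 lm luminaires
n = luminaires_needed(500, 200, 5000)
print(n, "luminaires ->", n * 50, "W installed at 50 W each (assumed wattage)")
```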

Relevance: 60.00%

Abstract:

The aim of this research project is to compare two mathematical techniques of polynomial approximation: approximation by the least-squares criterion and uniform ("minimax") approximation. Both the current copper market, with its fluctuations over time, and the different mathematical models and computer programs available are described. Matlab® was selected as the software tool, since its mathematical library is very extensive and widely used, and its programming language is powerful enough to develop the programs required. Different approximating polynomials were obtained from a sample (a historical series) recording the variation of the copper price in recent years; the complete historical series and two significant sections of it were analysed. The results obtained include values of interest for other projects.
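A minimal Python sketch of the two criteria being compared (the thesis itself used Matlab®): a least-squares polynomial fit, and the uniform (minimax) fit posed as a linear program that minimizes the maximum residual over the sample. The data here are synthetic, not the copper price series.

```python
import numpy as np
from scipy.optimize import linprog

def fit_polynomials(x, y, degree):
    V = np.vander(x, degree + 1)                  # Vandermonde matrix
    c_ls = np.linalg.lstsq(V, y, rcond=None)[0]   # least-squares coefficients
    # Minimax: variables [c_0..c_degree, t]; minimize t s.t. |V c - y| <= t
    n, m = V.shape
    cost = np.zeros(m + 1); cost[-1] = 1.0
    A = np.block([[V, -np.ones((n, 1))], [-V, -np.ones((n, 1))]])
    b = np.concatenate([y, -y])
    res = linprog(cost, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * m + [(0, None)])
    return c_ls, res.x[:-1]

# Example on a synthetic series
x = np.linspace(0, 1, 50)
y = np.sin(3 * x) + 0.05 * np.random.default_rng(0).standard_normal(50)
c_ls, c_mm = fit_polynomials(x, y, degree=3)
V = np.vander(x, 4)
print("max |error|  LS:", np.max(np.abs(V @ c_ls - y)),
      " minimax:", np.max(np.abs(V @ c_mm - y)))
```

By construction the minimax fit never has a larger maximum error than the least-squares fit on the same sample, which is exactly the trade-off the two criteria embody.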

Relevance: 60.00%

Abstract:

Software is an increasingly important part of any modern electronic circuit; for example, a circuit built around some type of microprocessor must incorporate a control program, however small. When computer programs are used in modern electronic circuits it is highly advisable, if not indispensable, to carry out a series of quality tests on the design. These tests are increasingly difficult to perform owing to the large size of the software in current systems, which is why a structured series of tests is needed in order to build a quality system, and in some cases a system that poses no danger to humans or the environment. This proposal explains the test design techniques that currently exist (at least the most basic ones, since it is a very broad topic) for the quality control of the software an embedded system may contain. In addition, many electronic circuits, because of their control or hardware requirements, must be driven by a program that needs more than a simple microprocessor: a small program run under an operating system such as Linux, AIX, Unix or Windows, in which case quality control should be carried out with other design techniques. It may also be the case that the electronic circuit must be controlled through a web page. The objective is to survey the current test design techniques oriented to the development of embedded systems.
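One of the basic test-design techniques such a survey covers, boundary-value analysis, can be sketched in a few lines of Python; the saturation routine below is a hypothetical example of the kind of function an embedded controller might contain, not code taken from any specific system.

```python
import unittest

def saturate(value, lo=0, hi=255):
    """Clamp a sensor/actuator value to the valid range [lo, hi]."""
    return max(lo, min(hi, value))

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # exercise just below, at, and just above each boundary
        for value, expected in [(-1, 0), (0, 0), (1, 1),
                                (254, 254), (255, 255), (256, 255)]:
            self.assertEqual(saturate(value), expected)

if __name__ == "__main__":
    unittest.main()
```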

Relevance: 60.00%

Abstract:

For the design and calculation of steel structures, mainly portal frames and roof lattices, the most commonly used tools are node-and-bar computer programs. In these programs the geometry and section of the bars are defined; their mechanical characteristics are perfectly known, and for them we obtain specific calculation results in terms of stresses and deformations. The other component of the model, the nodes, presents much greater complexity when it comes to establishing their mechanical properties, mainly their rotational stiffness, and to obtaining results for their stresses and deformations. This "ignorance" about the real behaviour of the joints is generally overcome by assigning the model nodes the condition of rigid or pinned. Although calculation programs offer the possibility of introducing nodes with an intermediate stiffness (semi-rigid nodes), the stiffness of each node depends on the real geometry of the connection, which, given the great variety of connection geometries present in any project, makes it practically unviable to introduce the corresponding coefficients for every node in node-and-bar models. Both the Eurocode and the CTE establish that each connection shall have an associated characteristic moment-rotation curve, to be determined by the designers using calculation tools or experimental procedures; however, this is difficult to carry out for each project. The consequence is that, in practice, extensive checks and calculation reports are produced for the bars of the structures, while the solution and execution of the connections are left to common practice, their safety and real behaviour remaining unjustified and unverified. Another consequence of the lack of characterization of the connections is that we do not know how their real behaviour affects the stresses and deformations of the bars framing into them, doubts that frequently assail us not only in the design phase but also when solving the execution problems that inevitably arise during construction.
Calculation by the finite element method is a tool that allows the real geometry of profiles and connections to be introduced, and therefore makes it possible to address the real behaviour of the connections, which is conditioned by their geometry. A typical case is the connection of a beam to a plate or a column welding only the web. It is usual to assimilate this connection to a pin. The finite element model, however, reveals its real behaviour, which is intermediate between pinned and fixed, since a moment is transmitted and the rotation is smaller than that of a simple support. Nevertheless, applying the finite element model with the geometry of all the structural elements of a steel framework is in general not viable from a practical point of view either, since it requires investing a great deal of time in comparison with the increase in precision obtained over node-and-bar programs, which are much faster in the structure-modelling phase.
In this thesis, finite element modelling has been used to solve a series of representative cases of the connections most commonly executed in building works, namely beam-column connections, establishing the behaviour of these connections as a function of the variables that commonly arise:
•Execution of beam-column connections welding only the web (web connection), or welding the beam to the column along its whole perimeter (full connection).
•Provision or not of stiffeners in the columns.
•Use of columns of 2UPN box section or of HEB type, which are the column types used in almost 100% of building cases.
To establish the influence of these variables on the behaviour of the connections, and their repercussion on the beams, a comparative analysis has been made of the result variables of the cases studied:
•Stress states in beams and connections.
•Bending moments at beam ends.
•Total and relative rotations at nodes.
•Deflections.
Another aspect the cases studied allow us to analyse is the assessment, from the point of view of execution costs, of full-perimeter connections versus web connections, and of providing or not stiffeners in full-perimeter connections. The results in this respect are strictly economic, without prejudice to safety considerations or the designers' preferences advising a particular solution. Finally, the third aspect addressed is the comparison between the results obtained by the finite element method, closer to reality since the relative rotations at the connections are taken into account, and those obtained with node-and-bar programs. In this way we can continue to use the node-and-bar model, more versatile and faster, but knowing its limitations and in which aspects, and to what extent, its results must be weighted. The last section of the thesis points out a series of topics on which it would be interesting to go deeper in later studies, by means of finite element models, in order to better understand the behaviour of steel structural connections in aspects that cannot be addressed with node-and-bar programs.
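As a small illustration of the characteristic moment-rotation curve mentioned above, the Python sketch below evaluates an idealized bilinear law for a semi-rigid joint; the stiffness and resistance values are assumed, illustrative inputs, not results of the thesis, and real curves come from the component method or from tests.

```python
def joint_moment(phi: float, Sj_ini: float, Mj_Rd: float) -> float:
    """Moment transmitted by a semi-rigid joint at rotation phi (rad):
    elastic branch with initial stiffness Sj_ini, capped at the moment
    resistance Mj_Rd (idealized elastic-plastic moment-rotation law)."""
    sign = 1.0 if phi >= 0 else -1.0
    return sign * min(Sj_ini * abs(phi), Mj_Rd)

# Example: Sj_ini = 20 MN*m/rad, Mj_Rd = 80 kN*m; the joint reaches its
# resistance at phi = Mj_Rd / Sj_ini = 4e-3 rad.
print(joint_moment(0.002, 20e6, 80e3))  # 40000.0 N*m (elastic branch)
print(joint_moment(0.010, 20e6, 80e3))  # 80000.0 N*m (capped at Mj_Rd)
```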

Relevance: 60.00%

Abstract:

The book represents a very interesting example of the possibility of combining in a single publication the basic theory of structures and quite advanced topics on the same subject. The author fulfills this objective in a reasonably sized book: less than 400 pages divided into 15 chapters averaging 20 pages each, plus 9 short appendices. A diskette is also included with the book, containing training as well as practical executable programs on different aspects of structural analysis, such as cross-section properties, general-purpose programs for the static, dynamic and stability analysis of simple bar structures, etc. The figures are didactic and have been carefully drawn.

Relevance: 60.00%

Abstract:

The use of Information and Communication Technologies (ICT) in learning environments makes it possible to achieve maximum interaction between teachers and students. Virtual Learning Environments are computer programs that benefit learning by facilitating communication between users. Open-source software makes it possible to create one's own modular online learning environment and put it into service quickly. In the present paper the use of a Learning Management System (LMS) as a continuous education tool is proposed.

Relevance: 60.00%

Abstract:

Early ancestors of crop simulation models (De Wit, 1965; Monteith, 1965; Duncan et al., 1967) were born before primitive personal computers were available (e.g. the Apple II, released in 1977, and the IBM PC, released in 1981). Paleo-computer programs were run on mainframes with the support of punch cards. As computers became more available and powerful, crop models evolved into sophisticated tools summarizing our understanding of how crops operate. This evolution was triggered by the need to answer new scientific questions and to improve the accuracy of model simulations, especially under limiting conditions.

Relevance: 60.00%

Abstract:

Using computer programs developed for this purpose, we searched for various repeated sequences, including inverted, direct tandem, and homopurine–homopyrimidine mirror repeats, in various prokaryotes, eukaryotes, and an archaebacterium. Comparison of observed frequencies with expectations revealed that bacterial genomes and organelles either show random repeat frequencies or are enriched for inverted and/or direct tandem repeats. By contrast, in all eukaryotic genomes studied we observed an overrepresentation of all repeats, especially homopurine–homopyrimidine mirror repeats. Analysis of the genomic distribution of all abundant repeats showed that they are virtually excluded from coding sequences. Unexpectedly, the frequencies of abundant repeats, normalized to their expectations, were almost perfect exponential functions of repeat size, and for a given repeat this function was indistinguishable between different genomes.
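A naive Python sketch of the repeat classes named above (the authors' actual programs are more elaborate and are not reproduced here): even-length windows are tested for an inverted repeat (second half equals the reverse complement of the first), a direct tandem repeat (second half repeats the first), or a homopurine/homopyrimidine mirror repeat (a single-strand palindrome made of purines only or pyrimidines only).

```python
COMP = str.maketrans("ACGT", "TGCA")

def is_inverted(seq):
    """Second half equals the reverse complement of the first half."""
    h = len(seq) // 2
    return seq[-h:] == seq[:h].translate(COMP)[::-1]

def is_tandem(seq):
    """Second half repeats the first half."""
    h = len(seq) // 2
    return seq[-h:] == seq[:h]

def is_mirror_pu_py(seq):
    """Palindrome on one strand, purines only or pyrimidines only."""
    return seq == seq[::-1] and (set(seq) <= set("AG") or set(seq) <= set("CT"))

def scan(genome, size, test):
    """Start positions of fixed-size windows satisfying `test`."""
    return [i for i in range(len(genome) - size + 1)
            if test(genome[i:i + size])]

print(scan("AAACGTAA", 4, is_inverted))          # [2] -> "ACGT"
print(scan("TTAGGAAGGATT", 8, is_mirror_pu_py))  # [2] -> "AGGAAGGA"
```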

Relevance: 60.00%

Abstract:

Various types of physical mapping data were assembled by developing a set of computer programs (Integrated Mapping Package) to derive a detailed, annotated map of a 4-Mb region of human chromosome 13 that includes the BRCA2 locus. The final assembly consists of a yeast artificial chromosome (YAC) contig with 42 members spanning the 13q12-13 region and aligned contigs of 399 cosmids established by cross-hybridization between the cosmids, which were selected from a chromosome 13-specific cosmid library using inter-Alu PCR probes from the YACs. The end sequences of 60 cosmids spaced nearly evenly across the map were used to generate sequence-tagged sites (STSs), which were mapped to the YACs by PCR. A contig framework was generated by STS content mapping, and the map was assembled on this scaffold. Additional annotation was provided by 72 expressed sequences and 10 genetic markers that were positioned on the map by hybridization to cosmids.
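The idea behind STS-content mapping can be illustrated with a short sketch: clones sharing a sequence-tagged site are inferred to overlap, so grouping clones through shared STS content yields contigs. The Python code below is a minimal illustration with hypothetical clone and STS names, not the Integrated Mapping Package itself.

```python
from collections import defaultdict

def sts_contigs(hits):
    """hits: dict clone -> set of STS names. Returns groups of clones linked,
    directly or transitively, by shared STS content (union-find grouping)."""
    parent = {c: c for c in hits}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression
            c = parent[c]
        return c
    by_sts = defaultdict(list)
    for clone, stss in hits.items():
        for s in stss:
            by_sts[s].append(clone)
    for clones in by_sts.values():  # union all clones sharing an STS
        for other in clones[1:]:
            parent[find(other)] = find(clones[0])
    groups = defaultdict(list)
    for c in hits:
        groups[find(c)].append(c)
    return list(groups.values())

# Example: Y1 and Y2 share sts3; Y3 shares nothing -> two contigs
print(sts_contigs({"Y1": {"sts1", "sts3"},
                   "Y2": {"sts3", "sts5"},
                   "Y3": {"sts9"}}))
```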