823 results for Model Making


Relevance: 100.00%

Abstract:

This action research examines the enhancement of visual communication within the architectural design studio through physical model making. 'It is through physical model making that designers explore their conceptual ideas and develop the creation and understanding of space' (Salama & Wilkinson 2007:126). This research supplements Crowther's findings, extending the understanding of visual dialogue to include physical models. 'Architecture Design 8' is the final core design unit at QUT in the fourth year of the Bachelor of Design Architecture. At this stage it is essential that students have the ability to communicate their ideas in a comprehensive manner, relying on a combination of skill sets including drawing, physical model making, and computer modeling. Observations within this research indicate that students did not integrate these skill sets in the design process during the first half of the semester, focusing primarily on drawing and computer modeling. The challenge was to promote deeper learning through physical model making. This research addresses one of the primary reasons for the lack of physical model making: the limited assessment emphasis on the physical models. The unit was modified midway through the semester to better correlate the lecture theory with studio activities by incorporating a series of model making exercises conducted during studio time. The outcome of each exercise was assessed. Tutors were surveyed regarding the model making activities, and a focus group was conducted to obtain formal feedback from students. Students and tutors recognised the added value of communicating design ideas through physical forms and model making. The studio environment was invigorated by the enhanced learning outcomes of the students who participated in the model making exercises. The conclusions of this research will guide the structure of the upcoming iteration of the fourth-year design unit.

Relevance: 70.00%

Abstract:

A recent nonlinear system by Friston et al. (2000, NeuroImage 12: 466–477) links changes in the BOLD response to changes in neural activity. The system consists of five subsystems, linking: (1) neural activity to flow changes; (2) flow changes to oxygen delivery to tissue; (3) flow changes to changes in blood volume and venous outflow; (4) changes in flow, volume, and oxygen extraction fraction to deoxyhemoglobin changes; and finally (5) volume and deoxyhemoglobin changes to the BOLD response. In subsystem 2, Friston et al. exploit a model by Buxton and Frank coupling flow changes to changes in oxygen metabolism, which assumes tissue oxygen concentration to be close to zero. We describe below a model of the coupling between flow and oxygen delivery that takes into account the modulatory effect of changes in tissue oxygen concentration. The major development has been to extend the original Buxton and Frank model of oxygen transport to a full dynamic capillary model, making it applicable to both transient and steady-state conditions. Furthermore, our modification enables us to determine the time series of CMRO2 changes under different conditions, including CO2 challenges. We compare the performance of the "Friston system" using the original model of Buxton and Frank and using our model. We also compare the data predicted by our model (with appropriate parameters) to data from a series of OIS studies. The qualitative differences in the behaviour of the models are exposed by different experimental simulations and by comparison with the results of OIS data from brief and extended stimulation protocols and from experiments using hypercapnia.
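The chain the abstract describes (neural activity to flow, to volume and deoxyhemoglobin, to BOLD) can be sketched with the standard balloon-model equations associated with Friston et al. (2000). This is a minimal Euler-integration sketch under assumed, illustrative parameter values, and it uses the original Buxton-Frank oxygen-extraction coupling rather than the authors' extended capillary model:

```python
import numpy as np

def balloon_bold(u, dt=0.01, eps=0.5, tau_s=0.8, tau_f=0.4,
                 tau_0=1.0, alpha=0.32, E0=0.4, V0=0.02):
    """Euler integration of the hemodynamic (balloon) model:
    neural input u -> flow signal s, flow f -> volume v and
    deoxyhemoglobin q -> BOLD signal. Parameters are illustrative."""
    s = 0.0
    f = v = q = 1.0
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    bold = np.empty(len(u))
    for i, ui in enumerate(u):
        ds = eps * ui - s / tau_s - (f - 1.0) / tau_f   # flow-inducing signal
        df = s                                           # flow follows the signal
        E = 1.0 - (1.0 - E0) ** (1.0 / f)                # Buxton-Frank extraction
        dv = (f - v ** (1.0 / alpha)) / tau_0            # venous balloon (outflow)
        dq = (f * E / E0 - v ** (1.0 / alpha) * q / v) / tau_0
        s += dt * ds; f += dt * df; v += dt * dv; q += dt * dq
        bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold
```

With zero neural input the system stays at its resting point and the BOLD output is identically zero; a brief input pulse produces the familiar transient response.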

Relevance: 70.00%

Abstract:

This paper discusses distribution and the historical phases of capitalism. It assumes that technical progress and growth are taking place and, given that, asks how income is functionally distributed between labor and capital, taking as reference the classical theory of distribution and Marx's falling tendency of the rate of profit. Based on historical experience, it first inverts the model, making the rate of profit the constant variable in the long run and the wage rate the residuum; second, it distinguishes three types of technical progress (capital-saving, neutral, and capital-using) and applies them to the history of capitalism, with the UK and France as reference. Given these three types of technical progress, it distinguishes four phases of capitalist growth, of which only the second is consistent with Marx's prediction. The last phase, after World War II, should in principle be capital-saving, consistent with growth of wages above productivity. Instead, since the 1970s wages have been kept stagnant in rich countries because of, first, the fact that the Information and Communication Technology Revolution proved to be highly capital-using, opening room for a new wave of substitution of capital for labor; second, the new competition coming from developing countries; third, the emergence of the technobureaucratic or professional class; and, fourth, the new power of the neoliberal class coalition associating rentier capitalists and financiers.
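The paper's inversion of the classical model (holding the profit rate constant in the long run and treating the wage as the residuum) can be written in one line. The figures below are purely illustrative, not taken from the paper:

```python
def wage_residuum(y, k, r=0.10):
    """Wage per worker once the profit rate r is held constant:
    profits per worker are r*k, so the wage is the residuum
    w = y - r*k, with y output per worker and k capital per worker."""
    return y - r * k

# Illustrative: under neutral technical progress the output-capital
# ratio y/k is constant, so wages grow in step with productivity.
w0 = wage_residuum(100.0, 200.0)   # -> 80.0
w1 = wage_residuum(110.0, 220.0)   # -> 88.0
```

Under capital-using progress k grows faster than y, so with r fixed the residuum grows more slowly than productivity, matching the paper's account of wage stagnation.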

Relevance: 60.00%

Abstract:

A design charrette was the starting point for understanding the different scales within the design process of this architectural intervention. The week-long, intense design activity promoted group interaction amongst students while examining local issues of the Fortitude Valley context. The process was an opportunity for fourth-year architectural design students to collaborate on a complex design problem. Students were asked to identify a unique condition of their site beyond the physical built environment. They were asked to consider the political and social context and to respond by designing a temporary art gallery for underdeveloped areas within Fortitude Valley. The exhibition shows how architecture can invigorate a space by providing new use and new life.

Relevance: 60.00%

Abstract:

The most significant recent development in scholarly publishing is the open-access movement, which seeks to provide free online access to scholarly literature. Though this movement is well developed in scientific and medical disciplines, American law reviews are almost completely unaware of the possibilities of open-access publishing models. This Essay explains how open-access publishing works and why it is important, and makes the case for its widespread adoption by law reviews. It also reports on a survey of law review publication policies conducted in 2004. This survey shows, inter alia, that few law reviews have embraced the opportunities of open-access publishing, and that many of the top law reviews are acting as stalking horses for the commercial interests of legal database providers. The open-access model promises greater access to legal scholarship, wider readership for law reviews, and reputational benefits for law reviews and the law schools that house them. This Essay demonstrates how open access comports with the institutional aims of law schools and law reviews, and is better suited to the unique environment of legal publishing than the model that law reviews currently pursue. Moreover, the institutional structure of law reviews means that the entire corpus of law reviews could easily move to an open-access model, making law the first discipline with a realistic prospect of complete commitment to free, open access to all scholarly output.

Relevance: 60.00%

Abstract:

Interference fits are used extensively in aircraft structural joints because of their improved fatigue performance. Recent advances in the analysis of these joints have increased understanding of the nonlinear load-contact and load-interfacial-slip variations within them. Experimental work on these problems is lacking because of the difficulty of determining partial contact and partial slip along the pin-hole interface. In this paper, an experimental procedure is presented for determining load-contact relations in interference/clearance fits, using photoelastic models and applying a technique for detecting the progress of separation/contact up to predetermined locations. The study incorporates a detailed procedure for model making, controlling interference, locating the break of contact up to known locations around the interface, estimating the degree of interference optically, determining interfacial friction, and evaluating stresses in the sheet. Experiments simulating joints in large sheets were carried out under both pin and plate loads. The present studies provide the load-separation behavior of interference joints with finite interfacial friction.

Relevance: 60.00%

Abstract:

In this work we address the Ginzburg-Landau theory of superconductivity (GL theory). We present its origins, characteristics, and most important results. The fundamental idea of this theory is to describe the phase transition that some metals undergo from a normal phase to a superconducting phase. During a phase transition in type-II superconductors, magnetic flux lines characteristically appear in certain regions of finite size, commonly called vortices. The dynamics of these topological structures is of great interest to the current scientific community and drives countless research groups in the field of superconductivity. On this basis, we study how these topological structures influence a phase transition in a two-dimensional model known as the XY model. In the XY model, the main agents responsible for the phase transition are the vortices (in fact, vortex-antivortex pairs). Observing this fact, Villain realized that the contribution of these topological defects could be made explicit in the partition function of the XY model by performing a duality transformation. That model serves as inspiration for the proposal of this work. We present here a model based on physical considerations about condensed-matter systems, while employing a formalism recently developed in reference [29] that makes it possible to render explicit the contribution of the topological defects in the original action proposed in our theory. We then analyze some classical limits and finally carry out the quantum fluctuations in order to obtain the complete expression of the vortex correlation function, which may be very useful in theories of interacting vortices (vortex dynamics).
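The vortex-antivortex pairs that drive the XY-model transition can be located numerically by measuring the winding of the spin angles around each lattice plaquette. This is a standard diagnostic sketch, not the duality formalism of reference [29]:

```python
import numpy as np

def vortex_charge(theta):
    """Winding number of XY spin angles theta (2D array) around each
    unit plaquette: sum the angle differences along the plaquette
    boundary, wrapped to (-pi, pi], and divide by 2*pi.
    +1 marks a vortex, -1 an antivortex."""
    def wrap(d):
        return (d + np.pi) % (2.0 * np.pi) - np.pi
    d1 = wrap(theta[1:, :-1] - theta[:-1, :-1])   # along one edge
    d2 = wrap(theta[1:, 1:]  - theta[1:, :-1])    # next edge
    d3 = wrap(theta[:-1, 1:] - theta[1:, 1:])     # next edge
    d4 = wrap(theta[:-1, :-1] - theta[:-1, 1:])   # closing edge
    return np.rint((d1 + d2 + d3 + d4) / (2.0 * np.pi)).astype(int)
```

For a single-vortex configuration, theta[i, j] = atan2(j - c, i - c), the plaquette containing the center carries charge +1 and the total charge over the lattice is +1.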

Relevance: 60.00%

Abstract:

This paper reviews the development of computational fluid dynamics (CFD) specifically for turbomachinery simulations and with a particular focus on application to problems with complex geometry. The review is structured by considering this development as a series of paradigm shifts, followed by asymptotes. The original S1-S2 blade-blade-throughflow model is briefly described, followed by the development of two-dimensional then three-dimensional blade-blade analysis. This in turn evolved from inviscid to viscous analysis and then from steady to unsteady flow simulations. This development trajectory led over a surprisingly small number of years to an accepted approach: a 'CFD orthodoxy'. A very important current area of intense interest and activity in turbomachinery simulation is in accounting for real geometry effects, not just in the secondary air and turbine cooling systems but also associated with the primary path. The requirements here are threefold: capturing and representing these geometries in a computer model; making rapid design changes to these complex geometries; and managing the very large associated computational models on PC clusters. Accordingly, the challenges in the application of the current CFD orthodoxy to complex geometries are described in some detail. The main aim of this paper is to argue that the current CFD orthodoxy is on a new asymptote and is not in fact suited for application to complex geometries, and that a paradigm shift must be sought. In particular, the new paradigm must be geometry centric and inherently parallel without serial bottlenecks. The main contribution of this paper is to describe such a potential paradigm shift, inspired by the animation industry, based on a fundamental shift in perspective from explicit to implicit geometry, and then illustrate this with a number of applications to turbomachinery.
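The implicit-geometry perspective the paper advocates can be illustrated with signed distance functions, where Boolean operations reduce to min/max and classifying a point as inside or outside is an independent evaluation per point, hence inherently parallel. The shapes below are placeholders, not turbomachinery geometry:

```python
import numpy as np

# Signed distance functions: negative inside the shape, positive outside.
def sphere(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - c, axis=-1) - radius

def union(f, g):
    return lambda p: np.minimum(f(p), g(p))      # CSG union

def subtract(f, g):
    return lambda p: np.maximum(f(p), -g(p))     # CSG difference

# A solid with a cavity: a body minus a small "cooling hole".
body = sphere((0.0, 0.0, 0.0), 1.0)
hole = sphere((0.8, 0.0, 0.0), 0.4)
solid = subtract(body, hole)

pts = np.array([[0.0, 0.0, 0.0],    # deep inside the body
                [0.8, 0.0, 0.0],    # inside the hole, so outside the solid
                [2.0, 0.0, 0.0]])   # far outside
inside = solid(pts) < 0.0           # vectorized, per-point classification
```

A design change (moving the hole) is a one-line edit to the distance field rather than a CAD re-mesh, which is the geometry-centric property the paper argues for.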

Relevance: 60.00%

Abstract:

The double-detonation explosion scenario of Type Ia supernovae (SNe Ia) has gained increased support from the SN Ia community as a viable progenitor model, making it a promising candidate alongside the well-known single degenerate and double degenerate scenarios. We present delay times of double-detonation SNe, in which a sub-Chandrasekhar mass carbon–oxygen white dwarf (WD) accretes non-dynamically from a helium-rich companion. One of the main uncertainties in quantifying SN rates from double detonations is the (assumed) retention efficiency of He-rich matter. Therefore, we implement a new prescription for the treatment of accretion/accumulation of He-rich matter on WDs. In addition, we test how the results change depending on which criteria are assumed to lead to a detonation in the helium shell. Comparing the results to our standard case (Ruiter et al.), we find that regardless of the adopted He accretion prescription, the SN rates are reduced by only ∼25 per cent if low-mass He shells (≲0.05 M⊙) are sufficient to trigger the detonations. If more massive (0.1 M⊙) shells are needed, the rates decrease by 85 per cent and the delay time distribution is significantly changed in the new accretion model: only SNe with prompt (<500 Myr) delay times are produced. Since theoretical arguments favour low-mass He shells for normal double-detonation SNe, we conclude that the rates from double detonations are likely to be high, and should not critically depend on the adopted prescription for accretion of He.

Relevance: 60.00%

Abstract:

Explosions on buildings, whether accidental or intentional, are infrequent, but their effects can be catastrophic. It is desirable to be able to predict with sufficient accuracy the consequences of these dynamic actions on civil buildings, among which reinforced concrete frame structures are a common typology. This doctoral thesis explores different practical options for the numerical modeling and computation of reinforced concrete structures subjected to explosions. Finite element models with explicit time integration are employed, demonstrating their capacity to simulate the fast, highly nonlinear dynamic physical and structural phenomena involved, and to predict the damage caused both by the explosion itself and by the possible progressive collapse of the structure. The work was carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), in which two main types of models were developed: 1) models based on continuum finite elements, in which the continuous medium is discretized directly through nodal displacement degrees of freedom; and 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for linear or surface elements. These models are developed and discussed at three levels: 1) material behaviour; 2) the response of structural elements such as columns, beams, and slabs; and 3) the response of complete buildings or significant parts of them.
Very detailed 3D continuum finite element models are developed, representing the mass concrete and the reinforcing steel separately. The concrete is represented with the CSCM constitutive model (Murray et al., 2007), which exhibits inelastic behaviour with different responses in tension and compression, hardening, cracking and compression damage, and failure. The steel is represented with a bilinear elastic-plastic constitutive model with failure. The exact geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements at its exact position within the concrete mass. The mesh is built by superimposing the concrete continuum elements and the beam elements of the segregated reinforcement, which are constrained to follow the deformation of the solid at each point through a penalty algorithm, thus reproducing the behaviour of reinforced concrete. In this work these are referred to simply as continuum FE models. With these continuum FE models, the structural response of construction elements (columns, slabs, and frames) to explosive actions is analysed. The results have also been compared with experiments on beams and slabs under various explosive charges, showing acceptable agreement and allowing the calculation parameters to be calibrated. These detailed models, however, are not advisable for analysing complete buildings, since the large number of finite elements required raises their computational cost beyond the reach of current computing resources.
In addition, structural finite element models (beams and shells) are developed which, at a much lower computational cost, reproduce the global behaviour of the structure with similar accuracy. The mass concrete and reinforcing steel are again modeled separately. The concrete is represented with the EC2 constitutive model (Hallquist et al., 2013), which also exhibits inelastic behaviour with different responses in tension and compression, hardening, cracking and compression damage, and failure, and is used in shell-type finite elements. The steel is again represented with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and reinforcement is modeled, taking into account the relative position of the steel within the concrete mass. The two meshes are joined through common nodes, producing a joint response. These are referred to simply as structural FE models. With these structural FE models, the same construction elements are simulated as with the continuum FE models, and by comparing their structural responses to explosions the former are calibrated, achieving similar structural behaviour at a reduced computational cost.
Both the continuum FE models and the structural FE models are also shown to be accurate for analysing progressive collapse, and they can be used to study simultaneously the damage from an explosion and the subsequent collapse. For this purpose, formulations are included that account for self-weight, imposed loads, and contact between parts of the structure. Both models are validated against a full-scale test in which a module with six columns and two storeys collapses when one of its columns is removed. The computational cost of the continuum FE model for this simulation is far higher than that of the structural FE model, making it unviable for complete buildings, whereas the structural FE model provides a sufficiently accurate global response at an affordable cost. Finally, the structural FE models are used to analyse explosions on multi-storey buildings, and two explosive-load scenarios are simulated for a complete building at moderate computational cost.

Relevance: 60.00%

Abstract:

We find evidence that conflicts of interest are pervasive in the asset management business owned by investment banks. Using data from 1990 to 2008, we compare the alphas of mutual funds, hedge funds, and institutional funds operated by investment banks and non-bank conglomerates. We find that, while no difference exists in performance by fund type, being owned by an investment bank reduces alphas by 46 basis points per year in our baseline model. Making lead loans increases alphas, but the dispersion of fees across portfolios decreases alphas. The economic loss is $4.9 billion per year.
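The baseline comparison (fund alphas reduced by investment-bank ownership) can be sketched as a factor regression with an ownership dummy; the specification and the synthetic data below are hypothetical, not the paper's actual model or sample:

```python
import numpy as np

def ownership_alpha_gap(excess_ret, factors, bank_dummy):
    """OLS of fund excess returns on risk factors plus an
    investment-bank-ownership dummy; the dummy's coefficient is the
    alpha difference attributable to ownership (hypothetical spec)."""
    X = np.column_stack([np.ones(len(excess_ret)), factors, bank_dummy])
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[-1]   # last column is the ownership dummy
```

On synthetic monthly data built with a known ownership penalty, the regression recovers that penalty; the paper's 46 basis points per year corresponds to roughly minus 0.0004 per month on this scale.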

Relevance: 40.00%

Abstract:

This study explores strategic decision-making (SDM) in micro-firms, an economically significant business subsector. As extant large- and small-firm literature currently proffers an incomplete characterization of SDM in very small enterprises, a multiple-case methodology was used to investigate how these firms make strategic decisions. Eleven Australian Information Technology service micro-firms participated in the study. Using an information-processing lens, the study uncovered patterns of SDM in micro-firms and derived a theoretical micro-firm SDM model. This research also identifies several implications for micro-firm management and directions for future research, contributing to the understanding of micro-firm SDM in both theory and practice.

Relevance: 40.00%

Abstract:

The significant challenge faced by government in demonstrating value for money in the delivery of major infrastructure revolves around estimating the costs and benefits of alternative modes of procurement. Faced with this challenge, one approach is to focus on a dominant performance outcome visible on the opening day of the asset as the means to select the procurement approach. In this case, value for money becomes a largely nominal concept, determined by whether the selected procurement mode delivers the selected performance outcome, notwithstanding possible under-delivery on other desirable performance outcomes and possibly excessive transaction costs. This paper proposes a change of mind-set in this practice: an approach in which the analysis commences with the conditions pertaining to the project and proceeds to deploy transaction cost and production cost theory to indicate a procurement approach that can claim superior value for money relative to competing procurement modes. This approach to delivering value for money in relative terms is developed in the first-order procurement decision-making model outlined in this paper. The model could complement the Public Sector Comparator (PSC) through cross-validation, and it lends itself more readily to public dissemination. As a possible alternative to the PSC, the model could save time and money by requiring less detailed project preparation than the reference project, and it may send a stronger signal to the market that encourages more innovation and competition.
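The first-order decision logic can be sketched as choosing the procurement mode with the lowest sum of production and transaction costs; the mode names and figures below are invented for illustration and carry no empirical weight:

```python
def total_cost(production, transaction):
    """First-order comparison: value for money is relative, so the
    preferred mode minimizes production plus transaction cost."""
    return production + transaction

# Invented figures for three hypothetical procurement modes.
modes = {
    "Design & Construct": total_cost(100.0, 8.0),   # 108.0
    "PPP": total_cost(95.0, 18.0),                   # 113.0
    "Alliance": total_cost(98.0, 12.0),              # 110.0
}
best = min(modes, key=modes.get)   # "Design & Construct"
```

The point of the sketch is the paper's relative framing: no mode is judged against an absolute benchmark, only against the competing modes under the same project conditions.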