1000 results for Neoclassicism (Architecture)


Relevance:

20.00%

Publisher:

Abstract:

Scientific and technological progress is not without flaws: the unforeseen consequences of its application can create new problems. This Machiavellian observation underlies the project En imparfaite santé : la médicalisation de l'architecture (Imperfect Health: The Medicalization of Architecture) of the Canadian Centre for Architecture (2011-2012), presented in the form of an exhibition and a catalogue. This thesis studies how the two platforms, the first experiential and the second theoretical, formulate a critique of the current process of medicalization, which has entered the field of contemporary architecture. The exhibition is approached as a discourse and as an installation of objects for a public; particular attention is therefore paid to the scenography and the visitor's route. Further reflections concern the graphic design, a tool supporting the leitmotiv of confrontation. In the study of the catalogue, the emphasis is placed on the introductory essay, which is implicitly traversed by the fundamentally ambivalent concept of the pharmakon. The peritext, the physical framing of the book's main content, is also examined. The comparative analysis then proposes that each platform conveys a different message, a strategy made possible by the ambivalence of the notion of the body, understood both literally and metaphorically. The thesis concludes with a short proposal for contextualizing both this duality and the questioning of the authority of techno-scientific discourse. Although En imparfaite santé directs its critique at the persistence of the modernist vision of architecture, we argue that the project concerns just as much, if not more, the current omnipresence of the digital. The latter, like modern architecture, does not only modify the conception of the human and architectural body; it also reinforces a positivist belief in technology that is not always counterbalanced by critical thinking.

Relevance:

20.00%

Publisher:

Abstract:

Since the publication of Thomas More's Utopia in the sixteenth century, the notion of utopia has been appropriated by various fields of artistic expression. Architecture and urbanism soon made it their own when it came to reconciling, in drawings and plans, idealized societies and their representations. Modernity and new technologies altered the modalities of utopia, which then tended toward the actualization of its models in built projects. The twentieth century was also marked by an abundance of forms and ideas whose transmission and circulation were accelerated by the creation of new media. While the 1960s and 1970s saw the emergence of experimental forms and utopian projects, fuelled notably by the Quiet Revolution and May 68, it remains difficult in Quebec to trace these projects in art and architecture because they are poorly documented. Through the study of the artistic practice of Yvette Bisson (1926-), Robert Roussil (1925-2013) and Melvin Charney (1935-2012), this thesis proposes to examine the various tactics of appropriation of space to which the sculptural practices of these three artists relate. Drawing on Michel de Certeau, Henri Lefebvre and Louis Marin, we seek to explain the critical import of the imaginaries deployed by the three artists to create new utopian places for sculpture.

Relevance:

20.00%

Publisher:

Abstract:

Polymers made of poly(ethylene glycol) chains grafted onto poly(lactic acid) chains (PEG-g-PLA) were used to produce stealth drug nanocarriers. A library of comb-like PEG-g-PLA polymers with different PEG grafting densities was prepared in order to obtain nanocarriers with dense PEG brushes at their surface, stability in suspension, and resistance to protein adsorption. The structural properties of nanoparticles (NPs) produced from these polymers by a surfactant-free method were assessed by DLS, zeta potential, and TEM and were found to be controlled by the amount of PEG present in the polymers. A critical transition from a solid NP structure to a soft particle with either a "micelle-like" or "polymer nano-aggregate" structure was observed when the PEG content was between 15 and 25% w/w. This structural transition was found to have a profound impact on the size of the NPs, their surface charge, their stability in suspension in the presence of salts, and the binding of proteins to their surface. The arrangement of the PEG-g-PLA chains at the surface of the NPs was investigated by 1H NMR and X-ray photoelectron spectroscopy (XPS). NMR results confirmed that the PEG chains were mostly segregated at the NP surface. Moreover, XPS and quantitative NMR made it possible to quantify the PEG chain coverage density at the surface of the solid NPs; the concordance between the two methods was remarkable. Physicochemical properties of the NPs, such as resistance to aggregation in a saline environment and anti-fouling efficacy, were related to the PEG surface density and ultimately to the polymer architecture. Resistance to protein adsorption was assessed by isothermal titration calorimetry (ITC) using lysozyme. The results indicate a correlation between PEG surface coverage and the level of protein interactions.
The results obtained lead us to propose such PEG-g-PLA polymers for nanomedicine development as an alternative to the predominant polyester-PEG diblock polymers.
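As a hedged illustration of what a PEG surface-coverage figure involves (all numbers below are hypothetical and not taken from the study), the density in chains per nm² for a solid spherical NP can be estimated from the particle diameter, particle density, PEG weight fraction and PEG graft molar mass:

```python
import math

def peg_surface_density(d_nm, rho_g_cm3, w_peg, m_peg_g_mol):
    """Rough estimate of PEG chains per nm^2 on a solid spherical NP.

    d_nm        : particle diameter in nm
    rho_g_cm3   : particle density in g/cm^3
    w_peg       : PEG weight fraction assumed to sit at the surface
    m_peg_g_mol : molar mass of one PEG graft in g/mol
    """
    N_A = 6.022e23                          # Avogadro's number
    r_cm = d_nm * 1e-7 / 2                  # radius in cm (1 nm = 1e-7 cm)
    volume_cm3 = (4 / 3) * math.pi * r_cm ** 3
    mass_g = volume_cm3 * rho_g_cm3
    chains = mass_g * w_peg / m_peg_g_mol * N_A
    area_nm2 = math.pi * d_nm ** 2          # sphere surface = pi * d^2
    return chains / area_nm2

# Hypothetical inputs: 100 nm particle, 1.25 g/cm^3, 15% w/w PEG 2000
density = peg_surface_density(100, 1.25, 0.15, 2000)
```

With these illustrative inputs the estimate lands near 1 chain/nm², i.e. in the dense-brush regime often targeted for stealth nanocarriers.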

Relevance:

20.00%

Publisher:

Abstract:

We developed a library of nanoparticles (NPs) from poly(ethylene glycol)–poly(lactic acid) comb-like polymers with variable amounts of PEG. Curcumin was encapsulated in the NPs with a view to developing a delivery platform to treat diseases involving oxidative stress affecting the CNS. We observed a sharp decrease in size between 15 and 20% w/w of PEG, which corresponds to a transition from a large solid-particle structure to a "micelle-like" or "polymer nano-aggregate" structure. Drug loading, loading efficiency and release kinetics were determined. The diffusion coefficients of curcumin in the NPs were determined using mathematical modeling; diffusion was higher in solid particles than in "polymer nano-aggregate" particles. The NPs did not present any significant toxicity when tested in vitro on a neuronal cell line. Moreover, the ability of curcumin-loaded NPs to prevent oxidative stress was demonstrated and linked to the polymer architecture and NP organization. Our study shows the intimate relationship between polymer architecture and the biophysical properties of the resulting NPs and sheds light on new approaches to designing efficient NP-based drug carriers.
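The release-kinetics model is not detailed in this abstract; one common way to analyse such data is to fit the Korsmeyer-Peppas power law, Mt/M∞ = k·tⁿ, where an exponent n ≈ 0.5 indicates Fickian diffusion. A minimal sketch on synthetic (not experimental) data:

```python
import numpy as np

# Hypothetical release data: fraction released vs. time in hours,
# generated from an ideal Fickian profile (n = 0.5) for illustration.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
frac = 0.12 * t ** 0.5

# Korsmeyer-Peppas: Mt/Minf = k * t**n
# Linearize: log(frac) = log(k) + n * log(t), then fit a line.
n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
k = np.exp(log_k)
```

On real data one would fit only the early portion of the release curve (typically Mt/M∞ < 0.6), where the power-law approximation holds.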

Relevance:

20.00%

Publisher:

Abstract:

Department of Mathematics, Cochin University of Science and Technology

Relevance:

20.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly improve software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active memory-bank state transitions corresponding to each bank-selection instruction, are used to detect redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code pattern, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
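The path-wise rule checking and redundant bank-switch detection described above can be sketched as follows; the opcodes, the rule and the control-flow graph here are simplified illustrations, not an accurate model of the PIC16F87X instruction set or of the dissertation's actual tool:

```python
# Toy control-flow graph: basic block -> (instructions, successor blocks).
CFG = {
    "entry": (["MOVLW", "BANKSEL1", "MOVWF"], ["a", "b"]),
    "a":     (["BANKSEL1", "MOVWF"], ["exit"]),   # re-selects active bank
    "b":     (["BANKSEL0", "MOVWF"], ["exit"]),   # genuine bank change
    "exit":  (["RETURN"], []),
}

def paths(block="entry", seen=()):
    """Enumerate every instruction sequence along each execution path."""
    instrs, succs = CFG[block]
    if not succs:
        yield seen + tuple(instrs)
    for s in succs:
        yield from paths(s, seen + tuple(instrs))

def redundant_bank_selects(path):
    """Flag positions where a BANKSEL re-selects the already-active bank."""
    active, hits = None, []
    for i, ins in enumerate(path):
        if ins.startswith("BANKSEL"):
            if ins == active:
                hits.append(i)
            active = ins
    return hits

# One list of violation positions per execution path.
violations = [redundant_bank_selects(p) for p in paths()]
```

The rule is applied to every path through the graph, mirroring the idea of tracking the active memory-bank state per bank-selection instruction; only the path through block "a" triggers the redundancy rule.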

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we have evolved a generic software architecture for a domain-specific distributed embedded system. The system under consideration belongs to the Command, Control and Communication systems domain. Systems in this domain have very long operational lifetimes, and their quality attributes are as important as their functional requirements. The main guiding principle followed in this paper for evolving the software architecture has been functional independence of the modules. The quality attributes considered most important for the system are maintainability and modifiability. Architectural styles best suited to the functionally independent modules are proposed with a focus on these quality attributes. The software architecture for the system is envisioned as a collection of architectural styles of the functionally independent modules identified.

Relevance:

20.00%

Publisher:

Abstract:

Speech is the primary, most prominent and convenient means of communication in audible language. Through speech, people can express their thoughts, feelings or perceptions by the articulation of words. Human speech is a complex signal that is non-stationary in nature. It carries immensely rich information: the words spoken, accent, attitude of the speaker, expression, intention, sex, emotion and style. The main objective of Automatic Speech Recognition (ASR) is to identify what people speak by means of computer algorithms, enabling people to communicate with a computer in natural spoken language. Automatic recognition of speech by machines has been one of the most exciting, significant and challenging areas of research in the field of signal processing over the past five to six decades. Despite the developments and intensive research done in this area, the performance of ASR is still lower than that of speech recognition by humans and has yet to achieve a completely reliable performance level. The main objective of this thesis is to develop an efficient speech recognition system for recognising speaker-independent isolated words in Malayalam.
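The recognition method itself is not specified in this abstract; a classic baseline for speaker-independent isolated-word recognition is template matching of feature sequences with dynamic time warping (DTW), which absorbs differences in speaking rate. A minimal sketch on toy 1-D feature sequences (real systems compare frames of spectral features such as MFCCs):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched utterance of the same "word" should score closer
# to the template than a different "word".
template   = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
same_word  = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0])  # slower version
other_word = np.array([2.0, 0.0, 2.0, 0.0, 2.0])
```

Classification then amounts to picking the template with the smallest DTW distance to the test utterance.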

Relevance:

20.00%

Publisher:

Abstract:

Globalization is widely regarded as the rise of the borderless world. In practice, however, true globalization points rather to a "spatial logic" by which globalization is manifested locally in the shape of insular space. Globalization in this sense is not merely about the physical fragmentation of space but also about the creation of social disintegration. This study tries to prove that global processes also create various forms of insular space, with specific social implications. To examine the problem, this study looks at two cases: China's Pearl River Delta (PRD) and Jakarta in Indonesia. The PRD case reveals three forms of insular space: the modular, the concealed and the hierarchical. The modular points to the form of enclosed factories where workers are vulnerable to human-rights violations due to the absence of public control. The concealed refers to the production of insular space through subtle discrimination against certain social groups in urban space. And the hierarchical points to a production of insular space formed by an imbalanced population flow. The Jakarta case shows further types of insularity related to the complexity of a mega-city shaped by a culture of exclusion: dormant and hollow insularity. The dormant refers to the genesis of an insular, radical community out of a culture of resistance. The hollow points to the making of a "pseudo community" in which a sense of community never really develops and social relationships with the surroundings remain weak. Although global processes create various expressions of territorial insularization, this study finds that the "line of flight" is always present, where the border of insularity is crossed. The PRD produces a vernacular modernization, carried out by peasants, that is less likely to be controlled by the politics of insularization.
In Jakarta, the culture of insularization produces urban informalities that have no space, either spatially or socially; hence their state of ephemerality continues as a tactic of place-making. This study argues that these crossings have the potential to become reconciling venues that defuse the power of insularity.

Relevance:

20.00%

Publisher:

Abstract:

Background: The most common application of imputation is to infer genotypes of a high-density panel of markers for animals that are genotyped with a low-density panel. However, the increase in accuracy of genomic predictions resulting from an increase in the number of markers tends to reach a plateau beyond a certain density. Another application of imputation is to increase the size of the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals is genotyped.

Methods: Imputation of completely un-genotyped dams was performed using known genotypes from the sire of each dam, one offspring, and the offspring's sire. Two methods, based on either allele or haplotype frequencies, were applied to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. The quality of imputation under different population structures was assessed, and the impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets.

Results: Imputation accuracy ranged from 0.52 to 0.93, depending on the population structure and the method used. The method that used allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers with more extreme allele frequencies. Inclusion of imputed dams in the training set increased the accuracy of genomic predictions, with gains ranging from close to zero to 37.14%, depending on the simulated scenario. Generally, the higher the accuracy already obtained with the genotyped training set, the smaller the increase achieved by adding imputed dams.

Conclusions: Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy was shown to be particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds for which the size of the reference population is limited.
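As a toy illustration of the family-based logic described above (a deliberate simplification of the compared methods; the biallelic encoding and the frequency fallback are assumptions for illustration): an offspring allele that cannot have come from the offspring's sire must be maternal, and remaining ambiguity can be resolved with population allele frequencies.

```python
def impute_dam(offspring, offspring_sire, freq_A):
    """Impute an un-genotyped dam at one biallelic locus ('A'/'a').

    offspring       : the dam's offspring genotype, e.g. ("A", "a")
    offspring_sire  : genotype of the offspring's sire
    freq_A          : population frequency of allele 'A'
    """
    # An offspring allele the sire cannot supply is unambiguously maternal.
    maternal = None
    for allele in offspring:
        if allele not in offspring_sire:
            maternal = allele
    if maternal is None:
        # Either offspring allele could be maternal; take the more frequent.
        maternal = max(set(offspring),
                       key=lambda a: freq_A if a == "A" else 1 - freq_A)
    # The dam's second allele is unobserved in this toy version; fall back
    # on allele frequency (the full method also uses the sire of the dam).
    second = "A" if freq_A >= 0.5 else "a"
    return tuple(sorted((maternal, second)))

# Offspring is Aa and its sire is AA, so the 'a' must be maternal.
dam = impute_dam(("A", "a"), ("A", "A"), freq_A=0.7)
```

In the actual study this inference is run over whole marker panels and extended with haplotype information, but the obligate-allele deduction above is the core of why closely related genotyped relatives make un-genotyped dams imputable.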

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to develop an internet-based seminar framework applicable to landscape architecture education. This process was accompanied by various aims. The basic expectation was to preserve the main characteristics of landscape architecture education in the online format as well. Beyond that, four further objectives were anticipated: (1) training of competences for virtual teamwork, (2) fostering intercultural competence, (3) creation of equal opportunities for education through internet-based open access and (4) synergy effects and learning processes across institutional boundaries. The work started from the hypothesis that these four expected advantages would compensate for the additional organisational effort caused by the online delivery of the seminars and thus lead to a sustainable integration of this new learning mode into landscape architecture curricula. This rationale was followed by a presentation of four areas of knowledge to which the seminar development was directly related: (1) landscape architecture as a subject and its pedagogy, (2) general learning theories, (3) developments in the ICT sector and (4) wider societal driving forces such as global citizenship and the increase of open educational resources. The research design took the shape of a pedagogical action research cycle. This approach was constructive: the author herself teaches international landscape architecture students, so the model could be applied directly in practice. Seven online seminars were implemented in the period from 2008 to 2013, and this experience represents the core of this study. The seminars were conducted with varying themes, while their pedagogy, organisation and technological tools remained largely identical.
The research design is further based on three levels of observation: (1) the seminar design on the basis of theory and methods from the learning sciences, in particular educational constructivism, (2) the seminar evaluation and (3) the evaluation of the seminars' long-term impact. The seminar model itself consists of four elements: (1) the taxonomy of learning objectives, (2) ICT tools and their application and pedagogy, (3) process models and (4) the case study framework. The presentation of the seminar framework was followed by the evaluation findings. The major findings of this study can be summed up as follows. Implementing online seminars across educational and national boundaries was possible both organisationally and technologically. In particular, a high level of cultural diversity among the seminar participants was definitely achieved. However, there were also obvious obstacles, primarily competing study commitments and incompatible schedules among students attending from different academic programmes, partly even in different time zones. Both factors had a negative impact on individual and working-group performance. With respect to the technical framework, the majority of the participants were able to use the tools either directly without any problems or after overcoming some smaller ones. The seminar wiki was also used intensively for completing the seminar assignments. However, too little truly collaborative text production was observed, which could be improved by changing the requirements of the collaborative task. Two different process models were applied for guiding the collaboration of the small groups, and both were generally successful. However, even though the students were able to follow the collaborative task and to co-construct and compare case studies, most of them were not able to synthesize the knowledge they had compiled.
This means that the analysis often remained at the level of the individual case, while further reflection, generalisation and critique were largely missing. The seminar model therefore needs better ways of triggering knowledge building and critical reflection. A more differentiated group-building strategy was also suggested for future seminars. A comparison of pre- and post-seminar concept maps showed that an increase in factual and conceptual knowledge at the individual level was widely recognizable. The evaluation of the case studies (the major seminar output) likewise revealed that the students developed in both the factual and the conceptual knowledge domains. Their self-assessment of individual learning development showed that the highest consensus was achieved in the field of subject-specific knowledge. The participants were much more doubtful about their progress in generic competences such as analysis, communication and organisation. Nevertheless, 50% of the participants confirmed that they perceived individual development in all competence areas covered by the survey. Were the four additional targets met? Concerning the competences for working in a virtual team, the vast majority were able to use the internet-based tools and to work with them in a target-oriented way, although there were obvious differences in the intensity and activity of participation, due to both external and personal factors. A very positive aspect is the achievement of a high level of cultural diversity, supporting the participants' intercultural competence. Learning from group members was clearly a success factor for the working groups. Regarding better access to educational opportunities, it became clear that a significant number of participants had been unable to go abroad during their studies for financial or personal reasons.
They confirmed that the online seminar was to some extent a compensation for not having studied abroad. Inter-institutional learning and synergy were achieved insofar as many teachers from different countries contributed individual lectures; however, those teachers hardly ever followed more than one session, so the learning effect remained largely within the seminar learning group. Looking back at the research design, the pedagogical action research cycle was an appropriate and valuable approach that allowed strong interaction between theory and practice. However, more external evaluation by peers, particularly of the participants' products, would have been valuable.

Relevance:

20.00%

Publisher:

Abstract:

The Scheme86 and HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.