968 results for Burroughs D-machine (Computer)
Abstract:
To date, biodegradable networks and particularly their kinetic chain lengths have been characterized by analysis of their degradation products in solution. We characterize the network itself by NMR analysis in the solvent-swollen state under magic angle spinning conditions. The networks were prepared by photoinitiated cross-linking of poly(dl-lactide)−dimethacrylate macromers (5 kg/mol) in the presence of an unreactive diluent. Using diffusion filtering and 2D correlation spectroscopy techniques, all network components are identified. By quantification of network-bound photoinitiator fragments, an average kinetic chain length of 9 ± 2 methacrylate units is determined. The PDLLA macromer solution was also used with a dye to prepare computer-designed structures by stereolithography. For these network structures, the average kinetic chain length is 24 ± 4 methacrylate units. In all cases the calculated molecular weights of the polymethacrylate chains after degradation are at most 8.8 kg/mol, far below the threshold for renal clearance. Upon incubation in phosphate-buffered saline at 37 °C, the networks show a mass loss profile over time similar to that of linear high-molecular-weight PDLLA (HMW PDLLA). The mechanical properties are preserved longer for the PDLLA networks than for HMW PDLLA: the initial tensile strength of 47 ± 2 MPa does not decrease significantly over the first 15 weeks, whereas HMW PDLLA loses 85 ± 5% of its strength within 5 weeks. The physical properties, kinetic chain length, and degradation profile of these photo-cross-linked PDLLA networks make them well suited for orthopedic applications and for use in (bone) tissue engineering.
Abstract:
In this paper, we present an automatic system for precise urban road model reconstruction from aerial images with high spatial resolution. The proposed approach consists of two steps: i) road surface detection and ii) road pavement marking extraction. In the first step, a support vector machine (SVM) is used to classify the images into two categories: road and non-road. In the second step, road lane markings are extracted from the detected road surface using 2D Gabor filters. Experiments on several pan-sharpened aerial images of Brisbane, Queensland, validate the proposed method.
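The two-step pipeline lends itself to a compact sketch. The Python fragment below is a minimal illustration using scikit-learn and scikit-image; the RBF kernel, Gabor frequency, orientation count, and percentile threshold are assumptions made for the sketch, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from skimage.filters import gabor

def detect_road_surface(train_pixels_rgb, train_labels, image_rgb):
    """Step 1: classify every pixel as road / non-road with an SVM."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(train_pixels_rgb, train_labels)   # labels: 1 = road, 0 = non-road
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3)
    return clf.predict(flat).reshape(h, w).astype(bool)

def extract_lane_markings(gray_image, road_mask, n_orientations=4):
    """Step 2: respond to line-like markings with a bank of 2D Gabor filters."""
    responses = []
    for k in range(n_orientations):
        real, _ = gabor(gray_image, frequency=0.25,
                        theta=k * np.pi / n_orientations)
        responses.append(real)
    response = np.max(responses, axis=0)   # strongest orientation wins
    response[~road_mask] = 0               # restrict to the detected road surface
    return response > np.percentile(response[road_mask], 95)
```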
Abstract:
This paper reports an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. To classify tree species in power line corridors, an object-based method is employed: individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as inputs to the classification algorithms. Several widely used performance metrics are used to evaluate the algorithms. The experimental results demonstrate that classification performance depends on the performance metric, the characteristics of the datasets, and the features used.
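The comparison protocol can be sketched in a few lines of scikit-learn. The snippet below is a hedged illustration: it cross-validates a handful of candidate classifiers on per-crown texture features and scores them with two common metrics. The estimators and metrics shown are assumptions; the paper compares seven algorithms and several metrics.

```python
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def compare_classifiers(X, y):
    """X: texture features per tree crown, y: species labels."""
    candidates = {
        "svm": SVC(),
        "random_forest": RandomForestClassifier(n_estimators=100),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "naive_bayes": GaussianNB(),
    }
    scoring = ["accuracy", "f1_macro"]   # two of the widely used metrics
    results = {}
    for name, clf in candidates.items():
        cv = cross_validate(clf, X, y, cv=5, scoring=scoring)
        results[name] = {m: cv[f"test_{m}"].mean() for m in scoring}
    return results
```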
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Two limitations of existing RTK techniques hinder their application for geodynamics purposes: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components, and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the resulting model as an ill-posed problem using the regularization method. To compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used in place of the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared with the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components; more significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can benefit substantially from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays for establishing 4-D troposphere tomography.
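A minimal numerical sketch of the regularized single-epoch solution follows, assuming a linearized observation model in numpy. The way the regularization parameter is derived from the no-RZTD covariance matrix is an illustrative simplification of the geometry-driven adaptation the abstract describes, not the paper's exact formula.

```python
import numpy as np

def regularized_epoch_solution(A, P, l, R, Q_geom, sigma0=1.0):
    """
    A      : design matrix including an RZTD column
    P      : observation weight matrix
    l      : observed-minus-computed vector
    R      : regularization matrix (e.g. selecting the RZTD parameter)
    Q_geom : covariance of position parameters estimated *without* RZTD,
             used in place of the unknown quadratic matrix of true values
    """
    N = A.T @ P @ A
    # Adaptive regularization parameter driven by observation geometry:
    alpha = sigma0**2 / np.trace(Q_geom)
    # Tikhonov-regularized normal equations stabilize the ill-posed model:
    x = np.linalg.solve(N + alpha * R, A.T @ P @ l)
    return x, alpha
```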
Abstract:
As computer applications become more available, both technically and economically, construction project managers are increasingly able to access advanced computer tools capable of transforming the role that project managers have typically performed. Competence in using these tools requires a dual commitment to training, from the individual and from the firm. Improving the computer skills of project managers can give construction firms a competitive advantage that differentiates them from others in an increasingly competitive international market. Yet few published studies have quantified the existing level of competence of construction project managers. Identifying project managers' existing computer application skills is a necessary first step toward developing more directed training that better captures the benefits of computer applications. This paper discusses the yet-to-be-released results of a series of surveys undertaken in Malaysia, Singapore, Indonesia, Australia, and the United States through QUT's School of Construction Management and Property and the M.E. Rinker, Sr. School of Building Construction at the University of Florida. This international survey reviews the use of, and reported competence in using, a series of commercially available computer applications by construction project managers. The five country locations of the survey allow cross-national comparisons between project managers undertaking continuing professional development programs. The results highlight a shortfall in the ability of construction project managers to capture the potential benefits of advanced computer applications, and they provide directions for targeted industry training programs. The survey also provides a unique insight into the cross-national usage of advanced computer applications and forms an important step in this ongoing joint review of technology and the construction project manager.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion. Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p. 40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48 hours, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games: small, fast, indie. An event like the 48hrs forces participants' attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed, there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit. The more successful teams need to put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: "Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance." (1983, p. 55) This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal: to win a cup full of jellybeans and make a working game in 48 hours. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. This set of four videos was created by Cameron Owen on the fly during the challenge, using both the iPhone video camera and editing software, in order to be available with immediacy, to allow the event audience to share the experience, and perhaps to give some insights into the creative process exposed by the 48 hour challenge. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York.
Abstract:
Coral reefs are biologically complex ecosystems that support a wide variety of marine organisms. These fragile communities are under enormous threat from natural and human influences. Properly assessing and measuring the growth and health of reefs is essential to understanding the impacts of ocean acidification, coastal urbanisation, and global warming. In this paper, we present an innovative 3-D reconstruction technique based on visual imagery as a non-intrusive, repeatable, in situ method for estimating physical parameters, such as surface area and volume, for efficient assessment of long-term variability. The reconstruction algorithms are presented and benchmarked using an existing data set. We validate the technique underwater, using a commercial off-the-shelf camera and a piece of staghorn coral, Acropora cervicornis. The resulting reconstruction is compared with a laser scan of the coral piece for assessment and validation; the comparison shows that 77% of the pixels in the reconstruction are within 0.3 mm of the ground-truth laser scan. Reconstruction results from an unknown video camera are also presented as a segue to future applications of this research.
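The validation step, comparing a reconstruction against a laser-scan ground truth, can be sketched as a nearest-neighbour tolerance test. The snippet below is a hedged illustration using scipy; it assumes the two point clouds are already aligned (e.g. by ICP, which is outside the sketch) and mirrors the "77% within 0.3 mm" style of comparison.

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within_tolerance(reconstruction, ground_truth, tol=0.0003):
    """Both inputs: (N, 3) point arrays in metres; tol = 0.3 mm."""
    tree = cKDTree(ground_truth)
    # Distance from each reconstructed point to its nearest scan point:
    distances, _ = tree.query(reconstruction)
    return np.mean(distances <= tol)   # fraction within tolerance
```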
Abstract:
The statutory derivative action was introduced in Australia in 2000. This right of action has been debated in the literature and has been introduced in a number of other jurisdictions as well. However, despite its operation in Australia for over 10 years, it is by no means clear that all issues have been resolved. This article considers the application of Pt 2F.1A of the Corporations Act to companies in liquidation under Ch 5. It demonstrates that this application involves not only proper statutory interpretation but also policy questions about the role of the liquidator, and the court's supervision of the liquidator, once a company has entered liquidation.
Abstract:
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks against machine learning systems become possible. In this paper, we present three broad research directions toward the goal of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understanding the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.
Abstract:
A diagnostic method based on Bayesian networks (probabilistic graphical models) is presented. Unlike conventional diagnostic approaches, which focus on system residuals at one or a few operating points, this method performs diagnosis by analyzing system behavior patterns over a window of operation. It is shown how this approach can loosen the dependency of diagnostic methods on precise system modeling while maintaining the desired characteristics of fault detection and diagnosis (FDD) tools (fault isolation, robustness, adaptability, and scalability) at a satisfactory level. As an example, the method is applied to fault diagnosis in HVAC systems, an area with considerable modeling and sensor network constraints.
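A toy sketch of window-based diagnosis with a Bayesian network follows, using the pgmpy library as an assumed tool choice. The two-pattern structure and all probabilities are invented for illustration; the abstract does not specify the paper's HVAC model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: one fault node explains two observed patterns.
model = BayesianNetwork([("Fault", "TempPattern"), ("Fault", "FlowPattern")])
model.add_cpds(
    TabularCPD("Fault", 2, [[0.95], [0.05]]),               # P(healthy), P(faulty)
    TabularCPD("TempPattern", 2, [[0.9, 0.2], [0.1, 0.8]],  # pattern | fault state
               evidence=["Fault"], evidence_card=[2]),
    TabularCPD("FlowPattern", 2, [[0.85, 0.3], [0.15, 0.7]],
               evidence=["Fault"], evidence_card=[2]),
)

# Behavior patterns are summarised over a window of operation (e.g.
# "temperature drifting high over the last hour"), then posted as evidence:
posterior = VariableElimination(model).query(
    ["Fault"], evidence={"TempPattern": 1, "FlowPattern": 1})
print(posterior)
```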
Abstract:
Most applications that help users with communication disorders in the information retrieval process do so by proposing non-textual modalities; few assist users within the textual modality itself. This paper introduces specific parameters linked to phonological awareness to support users who have difficulty writing queries (dysorthography) or reading documents (dyslexia). First, a phonology-based, sentence-level rewriting and interpretation system for typed queries is proposed: given the causes of dysorthography and the examples at our disposal, a system combining an editorial approach (spelling correction) with an oral approach (speech synthesis and automatic speech recognition) proved most appropriate. It has been evaluated on a corpus of questions obtained from dyslexic children. Second, a machine learning method uses specific criteria, such as grapho-phonemic cohesion, to estimate the readability of a sentence and then of a text; this measure has been learned on a corpus of sentence reading times from dyslexic children.
Abstract:
In an age where digital innovation knows no boundaries, research into brain-computer interfaces and other neural interface devices goes where none has gone before. The possibilities are endless, and as dreams become reality, the implications of these remarkable developments should be considered. Some of these new devices have been created to correct or minimise the effects of disease or injury, so the paper discusses some of the current research and development in the area, including neuroprosthetics. To assist researchers and academics in identifying some of the legal and ethical issues that might arise from the research and development of neural interface devices, using both non-invasive techniques and invasive procedures, the paper discusses a number of recent observations by authors in the field. The issue of enhancing human attributes by incorporating these new devices is also considered. Such enhancement may be regarded as freeing the mind from the constraints of the body, but there are legal and moral issues that researchers and academics would be well advised to contemplate as these new devices are developed and used. While different fact situations surround each of these new devices, and those that are yet to come, consideration of the legal and ethical landscape may assist researchers and academics in dealing effectively with matters that arise in these times of transition. Lawyers could seek to facilitate the resolution of legal disputes arising in this area of research and development within the existing judicial and legislative frameworks. Whether these frameworks will suffice, or will need to change to enable effective resolution, is a broader question to be explored.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion. Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p. 40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48 hours, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games: small, fast, indie. An event like the 48hrs forces participants' attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed, there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit. The more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: "Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance." (1983, p. 55) This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal: to win a cup full of jellybeans and make a working game in 48 hours. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. All of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York.
Abstract:
This paper describes a vision-based airborne collision avoidance system developed by the Australian Research Centre for Aerospace Automation (ARCAA) under its Dynamic Sense-and-Act (DSA) program. We outline the system architecture and the flight testing undertaken to validate the system performance under realistic collision course scenarios. The proposed system could be implemented in either manned or unmanned aircraft, and represents a step forward in the development of a “sense-and-avoid” capability equivalent to human “see-and-avoid”.