942 results for 291605 Processor Architectures
Abstract:
This study evaluated the effect of eye muscle area (EMA), ossification, carcass weight, marbling and rib fat depth on the incidence of dark cutting (pHu > 5.7) using routinely collected Meat Standards Australia (MSA) data. Data were obtained from 204,072 carcasses at a Western Australian processor between 2002 and 2008. Binomial pHu compliance data were analysed using a logit model in a Bayesian framework. Increasing eye muscle area from 40 to 80 cm² increased pHu compliance by around 14% (P < 0.001) in carcasses less than 350 kg. As carcass weight increased from 150 kg to 220 kg, compliance increased by 13% (P < 0.001), and younger cattle with lower ossification were also 7% more compliant (P < 0.001). As rib fat depth increased from 0 to 20 mm, pHu compliance increased by around 10% (P < 0.001), yet marbling had no effect on dark cutting. Increasing musculature and growth, combined with good nutrition, will minimise dark cutting in Australian beef.
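A minimal sketch of the kind of analysis described, on synthetic data and using an ordinary maximum-likelihood logit in place of the paper's Bayesian fit; the variable names, ranges and coefficients below are illustrative assumptions, not the MSA dataset.

```python
# Hypothetical sketch: pHu compliance (pHu <= 5.7) modelled as a binomial
# outcome of carcass traits with a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
ema = rng.uniform(40, 80, n)            # eye muscle area, cm^2
weight = rng.uniform(150, 350, n)       # carcass weight, kg
rib_fat = rng.uniform(0, 20, n)         # rib fat depth, mm
ossification = rng.uniform(100, 300, n) # maturity score

# Synthetic compliance probabilities loosely shaped like the reported trends
logit_p = -2.0 + 0.02 * ema + 0.01 * weight + 0.03 * rib_fat - 0.002 * ossification
compliant = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([ema, weight, rib_fat, ossification]))
model = sm.Logit(compliant, X).fit(disp=False)
print(model.summary())
```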
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
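As an informal illustration of what a user-defined, non-IEEE precision means, the hypothetical helper below emulates a reduced-mantissa floating point value in software; it is only a sketch of the rounding behaviour such a design exposes, not the FPGA implementation itself.

```python
import math

def quantise(x: float, mantissa_bits: int) -> float:
    """Round x to a float whose mantissa keeps only `mantissa_bits` bits,
    emulating a custom (non-standard) precision in software."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

print(quantise(math.pi, 8))           # pi kept to an 8-bit mantissa: 3.140625
```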
Abstract:
Architecture Post Mortem surveys architecture’s encounter with death, decline, and ruination following late capitalism. As the world moves closer to an economic abyss that many perceive to be the death of capital, contraction and crisis are no longer mere phases of normal market fluctuations, but rather the irruption of the unconscious of ideology itself. Post mortem is that historical moment wherein architecture’s symbolic contract with capital is put on stage, naked to all. Architecture is not irrelevant to fiscal and political contagion as is commonly believed; it is both the victim and the penetrating analytical agent of the current crisis. As the very apparatus for modernity’s guilt and unfulfilled drives (modernity’s debt), architecture is that ideological element that functions as a master signifier of its own destruction, ordering all other signifiers and modes of signification beneath it. It is under these conditions that architecture theory has retreated to an “Alamo” of history, a final desert outpost where history has been asked to transcend itself. For architecture’s hoped-for utopia always involves an apocalypse. This timely collection of essays reformulates architecture’s relation to modernity via the operational death-drive: architecture is but a passage between life and death. This collection includes essays by Kazi K. Ashraf, David Bertolini, Simone Brott, Peggy Deamer, Didem Ekici, Paul Emmons, Donald Kunze, Todd McGowan, Gevork Hartoonian, Nadir Lahiji, Erika Naginski, and Dennis Maher. Contents: Introduction: ‘the way things are’, Donald Kunze; Driven into the public: the psychic constitution of space, Todd McGowan; Dead or alive in Joburg, Simone Brott; Building in-between the two deaths: a post mortem manifesto, Nadir Lahiji; Kant, Sade, ethics and architecture, David Bertolini; Post mortem: building deconstruction, Kazi K. Ashraf; The slow-fast architecture of love in the ruins, Donald Kunze; Progress: re-building the ruins of architecture, Gevork Hartoonian; Adrian Stokes: surface suicide, Peggy Deamer; A window to the soul: depth in the early modern section drawing, Paul Emmons; Preliminary thoughts on Piranesi and Vico, Erika Naginski; Architectural asceticism and austerity, Didem Ekici; 900 miles to Paradise, and other afterlives of architecture, Dennis Maher; Index.
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to the selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, whereas the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
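A rough Python/OpenCV sketch of the pipeline described: foreground and skin-colour masks pre-select candidate regions, and a Viola-Jones cascade runs only inside those regions. The MOG2 background subtractor, HSV thresholds and minimum-region size are illustrative assumptions, not the authors' exact components.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def detect_faces(frame):
    # 1. Foreground mask from background subtraction (shadows removed)
    fg_mask = bg_subtractor.apply(frame)
    fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)[1]
    # 2. Skin-colour mask in HSV (illustrative thresholds)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    # 3. Fuse the two masks and extract candidate regions
    candidates = cv2.bitwise_and(fg_mask, skin_mask)
    contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                 # skip tiny regions
            continue
        # 4. Run the Viola-Jones detector only inside the candidate region
        roi = gray[y:y + h, x:x + w]
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 4):
            faces.append((x + fx, y + fy, fw, fh))
    return faces
```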
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities is why these architectures perform so well. In this paper, we offer a unique perspective on this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
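A small sketch of the basic setting the paper analyses, namely V1-inspired (here HOG) features feeding a linear SVM; the data is synthetic and the parameters are illustrative. The Kronecker-basis reinterpretation itself is a mathematical equivalence about this setup rather than extra code.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((200, 32, 32))      # synthetic 32x32 grayscale images
labels = rng.integers(0, 2, 200)        # synthetic binary labels

# V1-inspired feature extraction (HOG), then a plain linear SVM
features = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for im in images])
clf = LinearSVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```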
Abstract:
In the modern connected world, pervasive computing has become reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the utilization of secure side-channels for authentication, including the visual channel as a vicinity proof. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising its security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
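The QR-Auth protocol itself is not specified in this abstract, but it mentions integration with one-time-password generation; below is a standard, self-contained TOTP (RFC 6238) sketch of the kind of OTP such a scheme could wrap. The base32 secret is purely illustrative.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # illustrative shared secret
```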
Abstract:
An optical system which performs the multiplication of binary numbers is described and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme which takes advantage of the parallel operations capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
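A plain software sketch of the arithmetic the optical scheme parallelises: all partial products are formed, then accumulated with carry look-ahead addition. Bit lists are LSB-first, and the optical implementation details (LCLVs, space-sharing) are of course not represented.

```python
def carry_lookahead_add(a_bits, b_bits):
    """Add two equal-length LSB-first bit lists using generate/propagate logic."""
    g = [a & b for a, b in zip(a_bits, b_bits)]        # generate
    p = [a ^ b for a, b in zip(a_bits, b_bits)]        # propagate
    carries = [0]
    for i in range(len(a_bits)):
        carries.append(g[i] | (p[i] & carries[i]))     # c_{i+1} = g_i + p_i*c_i
    return [p[i] ^ carries[i] for i in range(len(a_bits))] + [carries[-1]]

def multiply(a_bits, b_bits):
    """Multiply two LSB-first bit lists via simultaneous partial products."""
    n = len(a_bits) + len(b_bits)
    total = [0] * n
    for j, b in enumerate(b_bits):                     # one partial product per bit of b
        partial = [0] * j + [a & b for a in a_bits] + [0] * (n - j - len(a_bits))
        total = carry_lookahead_add(total, partial)[:n]
    return total

# 6 (110) x 5 (101) = 30 (11110), written LSB-first
print(multiply([0, 1, 1], [1, 0, 1]))                  # [0, 1, 1, 1, 1, 0]
```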
Abstract:
Supply chain management and customer relationship management are concepts for optimizing the provision of goods to customers. Information sharing and information estimation are key tools used to implement these two concepts. The reduction of delivery times and stock levels can be seen as the main managerial objectives of integrative supply chain and customer relationship management. To achieve this objective, business processes need to be integrated along the entire supply chain, including the end consumer. Information systems form the backbone of any business process integration. The relevant information system architectures are generally well understood, but the conceptual specification of information systems for business process integration from a management perspective remains an open methodological problem. To address this problem, we will show how customer relationship management and supply chain management information can be integrated at the conceptual level in order to provide supply chain managers with relevant information. We will further outline how the conceptual management perspective of business process integration can be supported by deriving specifications for the enabling information systems from business objectives.
Abstract:
Simulation has been widely used to estimate the benefits of Cooperative Systems (CS) based on Inter-Vehicular Communications (IVC). This paper presents a new architecture built with the SiVIC simulator and the RTMaps™ multisensor prototyping platform. We introduce several improvements over a previous similar architecture, regarding IVC modelling and vehicle control. The architecture has been tuned with on-road measurements to improve fidelity. We discuss the results of a freeway emergency braking scenario (EEBL) implemented to validate our architecture's capabilities.
Abstract:
The aim of this work is to develop software that is capable of back projecting primary fluence images obtained from EPID measurements through phantom and patient geometries in order to calculate 3D dose distributions. In the first instance, we aim to develop a tool for pre-treatment verification in IMRT. In our approach, a Geant4 application is used to back project primary fluence values from each EPID pixel towards the source. Each beam is considered to be polyenergetic, with a spectrum obtained from Monte Carlo calculations for the LINAC in question. At each step of the ray tracing process, the energy differential fluence is corrected for attenuation and beam divergence. Subsequently, the TERMA is calculated and accumulated into an energy differential 3D TERMA distribution. This distribution is then convolved with monoenergetic point spread kernels, thus generating energy differential 3D dose distributions. The resulting dose distributions are accumulated to yield the total dose distribution, which can then be used for pre-treatment verification of IMRT plans. Preliminary results were obtained for a test EPID image comprising 100 × 100 pixels of unity fluence. Back projection of this field into a 30 cm × 30 cm × 30 cm water phantom was performed, with TERMA distributions obtained in approximately 10 min (running on a single core of a 3 GHz processor). Point spread kernels for monoenergetic photons in water were calculated using a separate Geant4 application. Following convolution and summation, the resulting 3D dose distribution produced familiar build-up and penumbral features. In order to validate the dose model, we will use EPID images recorded without any attenuating material in the beam for a number of MLC-defined square fields. The dose distributions in water will be calculated and compared to TPS predictions.
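A heavily simplified numerical sketch of the TERMA-and-convolution chain described above, assuming a monoenergetic beam, no divergence correction, and a Gaussian stand-in for the Monte Carlo point-spread kernel; the attenuation coefficient and grid are illustrative values, not the authors' Geant4 implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

mu = 0.005                     # illustrative linear attenuation coefficient, per mm
voxel_mm = 3.0
fluence = np.ones((100, 100))  # unit-fluence "EPID image" of 100 x 100 pixels

# Back-project along depth with exponential attenuation; the mu/rho factor of
# the TERMA definition is folded into the illustrative constant.
depths = np.arange(100) * voxel_mm                        # 300 mm deep phantom
terma = fluence[None, :, :] * np.exp(-mu * depths)[:, None, None]

# Stand-in for convolution with a monoenergetic point-spread (dose-deposition) kernel
dose = gaussian_filter(terma, sigma=2.0)
print(dose.shape, dose.max())
```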
Abstract:
We are pleased to present the papers from the Australasian Health Informatics and Knowledge Management (HIKM) conference stream held on 20 January 2011 in Perth as a session of the Australasian Computer Science Week (ACSW) 2011. Formerly HIKM was named Health Data and Knowledge Management; however, the inclusion of the health informatics term is timely given the current health reform. The submissions to HIKM 2011 demonstrated that Australasian researchers are leading the field, with many research and development innovations coming to fruition. Some of these innovations can be seen here, and we believe further recognition will be achieved through continued contributions to HIKM in the future. The HIKM conference is a review of health informatics related research, development and education opportunities. The conference papers were written to communicate with other researchers and to share research findings, covering many aspects of the health informatics field, namely: conceptual models and architectures; privacy and quality of health data; health workflow management and patient journey analysis; health information retrieval, analysis and visualisation; data integration/linking; systems for integrated or coordinated care; electronic health records (EHRs) and personally controlled electronic health records (PCEHRs); health data ontologies; and standardisation in health data and clinical applications.
Abstract:
This paper describes the content and delivery of a software internationalisation subject (ITN677) that was developed for Master of Information Technology (MIT) students in the Faculty of Information Technology at Queensland University of Technology. This elective subject introduces students to the strategies, technologies, techniques and current developments associated with this growing 'software development for the world' specialty area. Students learn what is involved in planning and managing a software internationalisation project, as well as designing, building and using a software internationalisation application. Students also learn how a software internationalisation project must fit into an overall product localisation and globalisation effort that may include culturalisation, tailored system architectures, and reliance upon industry standards. In addition, students are exposed to the different software development techniques used by organizations in this arena and the perils and pitfalls of managing software internationalisation projects.
Abstract:
First-principles computational studies indicate that (B, N, or O)-doped graphene ribbon edges can substantially reduce the energy barrier for H2 dissociative adsorption. The low barrier is competitive with many widely used metal or metal oxide catalysts. This suggests that suitably functionalized graphene architectures are promising metal-free alternatives for low-cost catalytic processes.
Abstract:
The movement of molecules inside living cells is a fundamental feature of biological processes. The ability to both observe and analyse the details of molecular diffusion in vivo at the single-molecule and single-cell level can add significant insight into understanding the molecular architectures of diffusing molecules and the nanoscale environment in which the molecules diffuse. The tool of choice for monitoring dynamic molecular localization in live cells is fluorescence microscopy; in particular, combining total internal reflection fluorescence with fluorescent protein (FP) reporters offers exceptional imaging contrast for dynamic processes in the cell membrane under relatively physiological conditions compared with competing single-molecule techniques. There exist several different complex modes of diffusion, and discriminating these from each other is challenging at the molecular level owing to underlying stochastic behaviour. Analysis is traditionally performed using mean square displacements of tracked particles; however, this generally requires more data points than is typical for single FP tracks owing to photophysical instability. Presented here is a novel approach allowing robust Bayesian ranking of diffusion processes to discriminate multiple complex modes probabilistically. It is a computational approach that biologists can use to understand single-molecule features in live cells.
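For context, the traditional mean-square-displacement analysis that the Bayesian ranking approach improves upon can be sketched as below on a synthetic 2D Brownian track; the step size, frame time and lag range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(scale=0.02, size=(200, 2)), axis=0)  # 2D Brownian track (um)
dt = 0.03                                                          # frame time (s)

def msd(track, max_lag):
    """Mean square displacement of one track for lags 1..max_lag frames."""
    lags = np.arange(1, max_lag + 1)
    values = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                       for lag in lags])
    return lags, values

lags, values = msd(track, 20)
# For free (Brownian) diffusion in 2D, MSD(t) ~ 4*D*t; estimate D from the slope
D = np.polyfit(lags * dt, values, 1)[0] / 4
print(f"estimated D = {D:.4f} um^2/s")
```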