ResearchPad - computer-hardware https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Fear and stock price bubbles]]> https://www.researchpad.co/article/elastic_article_13818

I evaluate Alan Greenspan's claim that stock price bubbles build up in periods of euphoria and tend to burst due to increasing fear. Indeed, there is evidence that, for example during a crisis triggered by increasing fear, both qualitative and quantitative measures of risk aversion increase substantially. It is argued that fear is a potential mechanism underlying financial decisions and drives countercyclical risk aversion. Inspired by this evidence, I construct a euphoria/fear index based on an economic model of time-varying risk aversion. Based on US industry returns from 1959 to 2014, my findings suggest that (1) Greenspan is correct in that the price run-up initially occurs in periods of euphoria, followed by a crash due to increasing fear; (2) on average, euphoria turns into fear roughly a year before an industry crashes, while the market is still bullish; and (3) there is no particular euphoria-fear pattern for price run-ups in industries that do not subsequently crash. I interpret the evidence in favor of Greenspan, who was labeled "Mr. Bubble" by the New York Times and accused of being a serial bubble blower.

]]>
<![CDATA[A deadline constrained scheduling algorithm for cloud computing system based on the driver of dynamic essential path]]> https://www.researchpad.co/article/5c8c195bd5eed0c484b4d4af

To solve the problem of deadline-constrained task scheduling in cloud computing systems, this paper proposes a deadline-constrained scheduling algorithm driven by the dynamic essential path (Deadline-DDEP). Based on the changes in each task node's dynamic essential path during scheduling, a dynamic sub-deadline strategy is proposed. The strategy assigns a different sub-deadline to each task node so as to satisfy both the constraint relations among task nodes and the user-defined deadline, and it fully accounts for how a task node's sub-deadline is affected by its dynamic essential path as scheduling progresses. The paper also proposes a quality assessment of optimization cost strategy to select a server for each task node: based on the sub-deadline urgency and the relative execution cost during scheduling, it selects a server that not only meets the sub-deadline but also incurs a much lower execution cost. In this way, the proposed algorithm completes the task graph within its deadline while minimizing its total execution cost. Finally, we evaluate the proposed algorithm via simulation experiments in Matlab. The experimental results show that the proposed algorithm reduces the total execution cost by 10.3% to 30.8% while meeting the deadline constraint. In view of these results, the proposed algorithm provides better-quality scheduling solutions for scientific application tasks in cloud computing environments than IC-PCP, DCCP, and CD-PCP.
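As a rough illustration of the two strategies, the following Python sketch distributes the user deadline over task nodes in proportion to their essential-path lengths and then picks the cheapest server that still meets each sub-deadline. All structures and names here are hypothetical simplifications, not the paper's actual Deadline-DDEP algorithm:

```python
# Illustrative sketch only. Tasks and servers are plain dicts;
# "essential_path" stands in for the dynamic essential path length
# that the real algorithm recomputes during scheduling.
def assign_sub_deadlines(tasks, deadline):
    """Spread the user-defined deadline over tasks in proportion to
    each task's essential-path length."""
    total = sum(t["essential_path"] for t in tasks)
    for t in tasks:
        t["sub_deadline"] = deadline * t["essential_path"] / total
    return tasks

def pick_server(task, servers):
    """Prefer the cheapest server that meets the sub-deadline; if none
    can, fall back to the fastest one."""
    feasible = [s for s in servers if s["exec_time"] <= task["sub_deadline"]]
    if not feasible:
        return min(servers, key=lambda s: s["exec_time"])
    return min(feasible, key=lambda s: s["cost"])
```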

]]>
<![CDATA[A low-cost, autonomous mobile platform for limnological investigations, supported by high-resolution mesoscale airborne imagery]]> https://www.researchpad.co/article/5c6f1526d5eed0c48467ae54

Two complementary measurement systems—built upon an autonomous floating craft and a tethered balloon—for lake research and monitoring are presented. The autonomous vehicle was assembled on a catamaran for stability and is capable of handling a variety of instrumentation for in situ and near-surface measurements. The catamaran hulls, each equipped with a small electric motor, support rigid decks for arranging equipment. An electric generator provides full autonomy for about 8 h. The modular power supply and instrumentation data management systems are housed in two boxes, which enable rapid setup. Due to legal restrictions in Switzerland (where the craft is routinely used), the platform must be observed from an accompanying boat while in operation. Nevertheless, the control system permits fully autonomous operation, with motion controlled by speed settings and waypoints, as well as obstacle detection. On-board instrumentation is connected to a central hub for data storage, with real-time monitoring of measurements from the accompanying boat. Measurements from the floating platform are complemented by mesoscale imaging from an instrument package attached to a helium-filled balloon. The aerial package records thermal and RGB imagery and transmits it in real time to a ground station. The balloon can be tethered to the autonomous catamaran or to the accompanying boat, and missions can be modified according to the imagery and/or catamaran measurements. Illustrative results showing the surface thermal variations of Lake Geneva demonstrate the versatility of the combined floating platform/balloon imagery system for limnological investigations.

]]>
<![CDATA[Remote access protocols for Desktop-as-a-Service solutions]]> https://www.researchpad.co/article/5c390ba8d5eed0c48491db6c

The use of remote desktop services on virtualized machines is a general trend for reducing the cost of desktop seats. Instead of assigning a physical machine with its operating system and software to each user, it is considerably easier to manage a light client machine that connects to a server where the instance of the user's desktop machine actually executes. Citrix and VMware have been major suppliers of these systems in private clouds. Desktop-as-a-Service solutions such as Amazon WorkSpaces offer similar functionality, but in a public cloud environment. In this paper, we review the main remote desktop protocols on offer for a cloud deployment. We evaluate the necessary network resources using a traffic model based on self-similar processes. We also evaluate the quality of experience perceived by the user, in terms of image quality and interactivity, providing values of Mean Opinion Score (MOS). The results confirm that the type of application running on the remote servers and the mix of users must be considered when determining the bandwidth requirements. Applications such as web browsing result in unexpectedly high traffic rates and long bursts, even more so than desktop video playback, because on-page animations are rendered on the server.
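A common way to obtain such self-similar traffic in simulation is to aggregate many ON/OFF sources with heavy-tailed period lengths; the sketch below shows this standard construction (parameter values are illustrative assumptions, and the authors' exact traffic model may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_onoff_traffic(n_sources=64, n_slots=10_000, alpha=1.4, rate=1.0):
    """Superpose ON/OFF sources whose ON and OFF durations are Pareto
    distributed; such aggregates are a standard generator of
    self-similar (long-range dependent) traffic."""
    total = np.zeros(n_slots)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            period = int(rng.pareto(alpha) + 1)   # heavy-tailed duration
            if on:
                total[t:t + period] += rate       # source transmits
            t += period
            on = not on
    return total

trace = pareto_onoff_traffic()                    # per-slot aggregate load
```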

]]>
<![CDATA[Validating quantum-classical programming models with tensor network simulations]]> https://www.researchpad.co/article/5c1813c3d5eed0c484775c05

The exploration of hybrid quantum-classical algorithms and programming models on noisy near-term quantum hardware has begun. As hybrid programs scale towards classical intractability, validation and benchmarking are critical to understanding the utility of the hybrid computational model. In this paper, we demonstrate a newly developed quantum circuit simulator based on tensor network theory that enables intermediate-scale verification and validation of hybrid quantum-classical computing frameworks and programming models. We present our tensor-network quantum virtual machine (TNQVM) simulator which stores a multi-qubit wavefunction in a compressed (factorized) form as a matrix product state, thus enabling single-node simulations of larger qubit registers, as compared to brute-force state-vector simulators. Our simulator is designed to be extensible in both the tensor network form and the classical hardware used to run the simulation (multicore, GPU, distributed). The extensibility of the TNQVM simulator with respect to the simulation hardware type is achieved via a pluggable interface for different numerical backends (e.g., ITensor and ExaTENSOR numerical libraries). We demonstrate the utility of our TNQVM quantum circuit simulator through the verification of randomized quantum circuits and the variational quantum eigensolver algorithm, both expressed within the eXtreme-scale ACCelerator (XACC) programming model.
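To illustrate the core idea of MPS-based simulation, the following minimal numpy sketch (not TNQVM's actual API) stores each qubit as a rank-3 tensor and applies a single-qubit gate by contracting it with one site, which leaves the bond dimension unchanged:

```python
import numpy as np

def apply_single_qubit_gate(mps, gate, site):
    """Apply a 2x2 gate to one site of an MPS stored as a list of
    tensors with shape (left_bond, physical=2, right_bond)."""
    mps[site] = np.einsum("ps,lsr->lpr", gate, mps[site])
    return mps

# |00> as a bond-dimension-1 MPS, then a Hadamard on qubit 0
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
mps = [np.zeros((1, 2, 1)) for _ in range(2)]
for t in mps:
    t[0, 0, 0] = 1.0
mps = apply_single_qubit_gate(mps, H, 0)
```

Two-qubit gates are what grow the bond dimension; compressing (truncating) those bonds is where the factorized form saves memory over a full state vector.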

]]>
<![CDATA[qTorch: The quantum tensor contraction handler]]> https://www.researchpad.co/article/5c181399d5eed0c48477553c

Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost compared with methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one based on the tree decomposition of the line graph, the other a straightforward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit's line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA with graphs of regularity approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line-graph-based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation. For a nontrivial subset of these quantum circuits, 50 to 100 qubits can easily be simulated on a single compute node.
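The importance of the contraction ordering can be seen even at toy scale: numpy's einsum_path reports the pairwise contraction order it chooses and an estimate of the resulting cost. This is only a generic illustration of the ordering problem, not qTorch's line-graph or stochastic orderers:

```python
import numpy as np

# Three tensors forming a chain; the order in which pairs are contracted
# determines the sizes of the intermediates and hence the total FLOPs.
A = np.random.rand(2, 64)
B = np.random.rand(64, 64)
C = np.random.rand(64, 2)

path, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")
print(info)   # shows the chosen pairwise order and its cost estimate
```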

]]>
<![CDATA[A high-speed brain-computer interface (BCI) using dry EEG electrodes]]> https://www.researchpad.co/article/5989db4fab0ee8fa60bdbaa5

Recently, brain-computer interfaces (BCIs) based on visual evoked potentials (VEPs) have been shown to achieve remarkable communication speeds. Although they use electroencephalography (EEG), a non-invasive method for recording neural signals, the application of gel-based EEG electrodes is time-consuming and cumbersome. In order to achieve a more user-friendly system, this work explores the usability of dry EEG electrodes with a VEP-based BCI. While the results show high variability between subjects, they also show that communication speeds of more than 100 bit/min are possible with dry EEG electrodes. To reduce performance variability and deal with the lower signal-to-noise ratio of the dry electrodes, an averaging method and a dynamic stopping method were introduced into the BCI system. These changes were shown to improve performance significantly, leading to an average classification accuracy of 76% with an average communication speed of 46 bit/min, which is equivalent to a writing speed of 8.8 error-free letters per minute. Although the BCI system works substantially better with gel-based EEG, dry EEG electrodes are more user-friendly and still allow high-speed BCI communication.
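The following sketch illustrates the general idea of combining averaging with dynamic stopping: epochs are accumulated until the classifier is confident enough. The scoring function and threshold are illustrative assumptions, not the paper's exact criterion:

```python
def classify_with_dynamic_stopping(epochs, score_fn, threshold=0.9):
    """Average incoming EEG epochs one by one and stop as soon as the
    classifier's best-class confidence exceeds a threshold (generic
    illustration; score_fn returns class probabilities)."""
    running, probs, n = None, None, 0
    for n, epoch in enumerate(epochs, start=1):
        running = epoch if running is None else running + epoch
        probs = score_fn(running / n)       # classify the running average
        if probs.max() >= threshold:
            break                           # confident enough: stop early
    return int(probs.argmax()), n           # predicted class, epochs used
```

Stopping early trades a little accuracy for speed, which is how a system can keep bit rate high despite the noisier dry-electrode signal.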

]]>
<![CDATA[Open-Source Syringe Pump Library]]> https://www.researchpad.co/article/5989dae7ab0ee8fa60bbe090

This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer-aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials, and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. The performance of the syringe pump was assessed, and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less of what one would expect to pay for a commercial syringe pump with similar performance. The design should suit the needs of any research activity requiring a syringe pump, including carefully controlled dosing of reagents and pharmaceuticals, and delivery of viscous 3-D printer media, among other applications.
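For a flavor of how such a pump can be driven from a Raspberry Pi, here is a minimal sketch that pulses a stepper driver's STEP pin via the RPi.GPIO library. The pin numbers and steps-per-millilitre constant are hypothetical and depend on the specific build; this is not the project's actual control code:

```python
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 17, 27     # assumed BCM pin assignments
STEPS_PER_ML = 3200            # depends on syringe, lead screw, microstepping

def dispense(volume_ml, ml_per_min):
    """Push a given volume at a given flow rate by pulsing the driver."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
    GPIO.output(DIR_PIN, GPIO.HIGH)                 # push direction
    steps = int(volume_ml * STEPS_PER_ML)
    half_period = 60.0 / (ml_per_min * STEPS_PER_ML) / 2.0
    for _ in range(steps):                          # one pulse per motor step
        GPIO.output(STEP_PIN, GPIO.HIGH); time.sleep(half_period)
        GPIO.output(STEP_PIN, GPIO.LOW);  time.sleep(half_period)
    GPIO.cleanup()

dispense(volume_ml=0.5, ml_per_min=2.0)
```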

]]>
<![CDATA[Computing with networks of nonlinear mechanical oscillators]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be0293

As lithographic techniques reach fundamental physical limits, it is becoming increasingly difficult to achieve gains in the density and power efficiency of microelectronic computing devices, and new approaches are required to maximize the benefits of distributed sensors, micro-robots, or smart materials. Biologically inspired devices, such as artificial neural networks, can process information with a high level of parallelism to efficiently solve difficult problems, even when implemented using conventional microelectronic technologies. We describe a mechanical device, operating in a manner similar to artificial neural networks, that efficiently solves two difficult benchmark problems (computing the parity of a bit stream, and classifying spoken words). The device consists of a network of masses coupled by linear springs and attached to a substrate by nonlinear springs, thus forming a network of anharmonic oscillators. As the masses can directly couple to forces applied on the device, this approach combines sensing and computing functions in a single power-efficient device with compact dimensions.
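A toy numerical version of such a device can be simulated directly: the sketch below integrates a small chain of damped Duffing-type oscillators with linear nearest-neighbour coupling, drives one mass with an input signal, and reads out the displacements. All parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# N masses, nearest-neighbour linear coupling (matrix K), and a cubic
# "anharmonic" anchoring spring on each mass.
N = 8
K = np.zeros((N, N))
for i in range(N - 1):
    K[i, i + 1] = K[i + 1, i] = 0.5

K1, K3, DAMPING = 1.0, 0.3, 0.05

def rhs(t, y):
    x, v = y[:N], y[N:]
    coupling = K @ x - K.sum(axis=1) * x      # sum_j K_ij * (x_j - x_i)
    a = -K1 * x - K3 * x**3 - DAMPING * v + coupling
    a[0] += np.sin(3.0 * t)                   # input signal drives mass 0
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 100.0), np.zeros(2 * N), max_step=0.05)
readout = sol.y[:N, -1]                       # displacements as features
```

As in reservoir computing, only a simple readout trained on such features would be adapted to the task; the nonlinear network itself stays fixed.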

]]>
<![CDATA[A Novel Camera Calibration Method Based on Polar Coordinate]]> https://www.researchpad.co/article/5989db4cab0ee8fa60bda94e

A novel camera calibration method based on polar coordinates is proposed. The world coordinates of the calibration points are first expressed in polar form and are converted to rectangular world coordinates during the calibration process. In the beginning, the calibration points are obtained in polar coordinates. Through the transformation between polar and rectangular coordinates, the points are expressed in rectangular form and then matched with their corresponding image coordinates. Finally, the camera parameters are obtained by optimizing an objective function. With the proposed method, the spatial relationships between objects and cameras are easily expressed in polar coordinates, which makes the method well suited for multi-camera calibration: cameras can be calibrated with fewer points, and the calibration images can be positioned according to the locations of the cameras. The experimental results demonstrate that the proposed method is efficient and calibrates cameras conveniently and with high accuracy.
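The coordinate transformation at the heart of the method is straightforward; a minimal sketch (with made-up point locations) converts planar polar calibration points to rectangular world coordinates, after which they can be paired with detected image points and passed to a standard optimizer such as cv2.calibrateCamera:

```python
import numpy as np

def polar_to_rect(r, theta, z=0.0):
    """Convert planar polar calibration points (r, theta) to rectangular
    world coordinates (x, y, z) before matching them to image points."""
    return np.stack([r * np.cos(theta),
                     r * np.sin(theta),
                     np.full_like(r, z)], axis=-1)

# e.g. points on two concentric circles around a chosen origin
r = np.repeat([100.0, 200.0], 8)
theta = np.tile(np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False), 2)
world_points = polar_to_rect(r, theta).astype(np.float32)
# world_points, together with the detected image points, can now be fed
# to a standard calibration optimizer.
```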

]]>
<![CDATA[Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework]]> https://www.researchpad.co/article/5989da51ab0ee8fa60b8dfb3

The field of volume visualization has undergone rapid development in recent years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism of Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license.
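One building block of stochastic volume rendering is sampling free-flight distances through a heterogeneous medium, commonly done with Woodcock (delta) tracking. The sketch below shows that generic technique, not Exposure Render's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def woodcock_free_path(pos, direction, sigma_t, sigma_max, t_max):
    """Sample a free-flight distance through a heterogeneous volume.
    sigma_t(p) is the local extinction coefficient, sigma_max a bound
    on it; fictitious ("null") collisions are rejected probabilistically."""
    t = 0.0
    while True:
        t -= np.log(1.0 - rng.random()) / sigma_max   # tentative step
        if t >= t_max:
            return None                               # left the volume
        if rng.random() < sigma_t(pos + t * direction) / sigma_max:
            return t                                  # real interaction
```

Because each sample is independent, effects like soft shadows and depth of field come out of the same estimator simply by jittering lights and lens positions.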

]]>
<![CDATA[eduSPIM: Light Sheet Microscopy in the Museum]]> https://www.researchpad.co/article/5989db49ab0ee8fa60bd9b97

Light Sheet Microscopy in the Museum

Light sheet microscopy (or selective plane illumination microscopy) is an important imaging technique in the life sciences. At the same time, this technique is also ideally suited for community outreach projects, because it produces visually appealing, highly dynamic images of living organisms and its working principle can be understood with basic optics knowledge. Still, the underlying concepts are widely unknown to the non-scientific public. On the occasion of the UNESCO International Year of Light, a technical museum in Dresden, Germany, launched a special, interactive exhibition. We built a fully functional, educational selective plane illumination microscope (eduSPIM) to demonstrate how developments in microscopy promote discoveries in biology.

Design Principles of an Educational Light Sheet Microscope

To maximize educational impact, we radically reduced a standard light sheet microscope to its essential components without compromising functionality and incorporated stringent safety concepts beyond those needed in the lab. Our eduSPIM system features one illumination and one detection path and a sealed sample chamber. We image fixed zebrafish embryos with fluorescent vasculature, because the structure is meaningful to laymen and visualises the optical principles of light sheet microscopy. Via a simplified interface, visitors acquire fluorescence and transmission data simultaneously.

The eduSPIM Design Is Tailored Easily to Fit Numerous Applications

The universal concepts presented here may also apply to other scientific approaches that are communicated to laymen in interactive settings. The specific eduSPIM design is adapted easily for various outreach and teaching activities. eduSPIM may even prove useful for labs needing a simple SPIM. A detailed parts list and schematics to rebuild eduSPIM are provided.

]]>
<![CDATA[Orientation-Based Control of Microfluidics]]> https://www.researchpad.co/article/5989db07ab0ee8fa60bc8a31

Most microfluidic chips utilize off-chip hardware (syringe pumps, computer-controlled solenoid valves, pressure regulators, etc.) to control fluid flow on-chip. This expensive, bulky, and power-consuming hardware severely limits the utility of microfluidic instruments in resource-limited or point-of-care contexts, where the cost, size, and power consumption of the instrument must be limited. In this work, we present a technique for on-chip fluid control that requires no off-chip hardware. We accomplish this by using inert compounds to change the density of one fluid in the chip. If one fluid is made 2% denser than a second fluid, then when the two flow together under laminar flow, the interface between them quickly reorients to be orthogonal to Earth's gravitational force. If the channel containing the fluids then splits into two channels, the amount of each fluid flowing into each channel is precisely determined by the angle of the channels relative to gravity. Thus, any fluid can be routed in any direction and mixed in any desired ratio on-chip simply by holding the chip at a certain angle. This approach allows sophisticated control of on-chip fluids with no off-chip control hardware, significantly reducing the cost of microfluidic instruments in point-of-care or resource-limited settings.

]]>
<![CDATA[Deploying a quantum annealing processor to detect tree cover in aerial imagery of California]]> https://www.researchpad.co/article/5989db4fab0ee8fa60bdbc67

Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems in computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps; the votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while the quadratic couplings are encoded as the strengths of physical interactions between the quantum bits. The hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation when mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open-space preserves and dense suburban development of Mill Valley, CA.
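The mapping from stump selection to a hardware-friendly form can be illustrated with a small QUBO construction: a squared voting loss plus a sparsity penalty is quadratic in the binary selector variables, whose pairwise couplings would then be encoded as physical interactions. This is a generic QBoost-style formulation under our own assumptions, not the authors' exact objective:

```python
import numpy as np

def build_qubo(H, y, lam=0.1):
    """H[i, n] = vote (+/-1) of stump i on sample n; y[n] = label (+/-1).
    The squared loss ||H.T @ w - y||^2 plus a penalty lam * sum(w) is
    quadratic in the binary selector w; since w_i**2 == w_i, the linear
    terms can be folded onto the diagonal of Q."""
    Q = (H @ H.T).astype(float)            # pairwise stump couplings
    linear = -2.0 * (H @ y) + lam
    np.fill_diagonal(Q, np.diag(Q) + linear)
    return Q    # minimize w.T @ Q @ w over w in {0,1}^m (up to a constant)

# Tiny example; on the annealer, Q's off-diagonal entries would become
# physical interaction strengths between qubits.
H = np.array([[1, 1, -1, 1], [1, -1, -1, 1], [-1, 1, 1, -1]])
y = np.array([1, 1, -1, 1])
Q = build_qubo(H, y)
```

The hardware's five-to-six-coupler limit is exactly a limit on how many nonzero off-diagonal entries of Q each variable may carry, which is what the truncation and rescaling step addresses.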

]]>
<![CDATA[Software Framework for Controlling Unsupervised Scientific Instruments]]> https://www.researchpad.co/article/5989da59ab0ee8fa60b8f8bd

Science outreach and communication are becoming more and more important for conveying the meaning of today's research to the general public. Public exhibitions of scientific instruments can provide hands-on experience with technical advances and their applications in the life sciences, but the software of such devices is often not appropriate for this purpose. In this study, we describe a software framework and the necessary computer configuration that are well suited for exposing a complex self-built, software-controlled instrument such as a microscope to laymen under limited supervision, e.g., in museums or schools. We identify several requirements that such software must meet, and we describe a single design that can be used to control either (i) a fully functional instrument in a robust and fail-safe manner, (ii) an instrument with low-cost or only partially working hardware attached for illustration purposes, or (iii) a completely virtual instrument without hardware attached. We describe how to assess the educational success of such a device, how to monitor its operation, and how to facilitate its maintenance. The introduced concepts are illustrated using our software to control eduSPIM, a fluorescent light sheet microscope that we are currently exhibiting in a technical museum.
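A minimal sketch of the three-mode idea: the instrument logic talks to one interface, and the chosen backend decides whether real, partial, or no hardware is attached. Class and device names are invented for illustration and are not the authors' actual API:

```python
class Stage:
    """Common interface used by the instrument logic."""
    def move_to(self, z):
        raise NotImplementedError

class RealStage(Stage):
    """Talks to the physical motor, e.g. over a serial port."""
    def __init__(self, port):
        self.port = port               # hypothetical device handle
    def move_to(self, z):
        pass                           # would send the motion command here

class VirtualStage(Stage):
    """No hardware attached: simulate the motion instantly."""
    def __init__(self):
        self.z = 0.0
    def move_to(self, z):
        self.z = z

def make_stage(mode):
    """Pick the backend at startup; the rest of the code is unchanged."""
    return RealStage("/dev/ttyUSB0") if mode == "real" else VirtualStage()
```

Because the exhibit logic never touches hardware directly, the same build can run fail-safe in the museum, drive a partial mock-up, or serve as a pure simulator for testing.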

]]>
<![CDATA[HTC Vive MeVisLab integration via OpenVR for medical applications]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdc0e0

Virtual Reality (VR), an immersive technology that replicates an environment via computer-simulated reality, receives a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain; examples are intervention planning, training, and simulation. This is especially useful for medical operations where an aesthetic outcome is important, such as facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK and provide a convenient graphical interface; however, ITK and VTK do not support Virtual Reality directly. In this study, the use of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing direct and uncomplicated use of the HTC Vive head-mounted display inside MeVisLab. Medical data coming from other MeVisLab modules can be connected directly, per drag-and-drop, to the Virtual Reality module, which renders the data inside the HTC Vive for immersive virtual reality inspection.

]]>
<![CDATA[A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA]]> https://www.researchpad.co/article/5989daaaab0ee8fa60ba8ead

This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme implemented on a field programmable gate array (FPGA). The scheme exploits characteristics of the NFA such as shared common prefixes and the absence of failure transitions. In an FPGA implementation of automaton-based string matching, each state transition is implemented with a look-up table (LUT) for the combinational logic between registers, and multiple state transitions between stages can be performed in a pipelined fashion. This paper proposes that multiple one-to-one state transitions, called merged state transitions, can be performed with a single LUT. By reducing the number of LUTs used to implement state transitions, the proposed pipelined NFA-based string matching scheme greatly reduces the hardware overhead of the combinational logic circuits.
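In software, the NFA's key properties (shared prefixes, no failure transitions because the start state stays active on every symbol) can be sketched as follows; this models the automaton's logic only, not the FPGA pipeline or the LUT mapping:

```python
def build_nfa(patterns):
    """Trie-style NFA: patterns with common prefixes share states; no
    failure transitions are needed because state 0 re-enters the active
    set on every input symbol."""
    trans, accept, nxt = {}, set(), 1
    for p in patterns:
        s = 0
        for ch in p:
            if (s, ch) not in trans:
                trans[(s, ch)] = nxt
                nxt += 1
            s = trans[(s, ch)]
        accept.add(s)
    return trans, accept

def match(text, trans, accept):
    active = {0}
    for ch in text:
        active = {trans[(s, ch)] for s in active if (s, ch) in trans} | {0}
        if active & accept:
            return True
    return False

trans, accept = build_nfa(["hers", "his", "she"])
print(match("ushers", trans, accept))   # True ("she" is found)
```

Chains of such one-to-one transitions are what the hardware scheme merges into single LUTs.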

]]>
<![CDATA[A mutation in porcine pre-miR-15b alters the biogenesis of MiR-15b/16-1 cluster and strand selection of MiR-15b]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be0004

MicroRNAs (miRNAs) are small non-coding RNAs that are involved in translational regulation of the messenger RNA molecules. Sequence variations in the genes encoding miRNAs could influence their biogenesis and function. MiR-15b plays an important role in cellular proliferation, apoptosis and the cell cycle. Here, we report the identification of a C58T mutation in porcine pre-miR-15b. Through in vitro and in vivo experiments, we determined that this mutation blocks the transition from pri-miRNA to pre-miRNA, alters the strand selection between miR-15b-5p and miR-15b-3p, and obstructs biogenesis of the downstream miR-16-1. These results serve to highlight the importance of miRNA mutations and their impacts on miRNA biogenesis.

]]>
<![CDATA[Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations]]> https://www.researchpad.co/article/5989da97ab0ee8fa60ba2228

Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10,000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and with all configurations tested separately with the -server parameter deactivated and activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario that allows mutual comparison of all the selected methods was used. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and can thus be recommended as the most suitable for simulations of large-scale agent-based models.
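For concreteness, here is a compact implementation of one of the compared methods, TOPSIS, which ranks alternatives by their relative closeness to the ideal solution (the example matrix and weights are made up):

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS. X[a, c] = score of alternative a on criterion c;
    benefit[c] is True for criteria to maximize, False for costs.
    Returns closeness scores in [0, 1]; higher is better."""
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize columns
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Three alternatives, criteria: price (cost), RAM (benefit), cores (benefit)
scores = topsis(np.array([[250.0, 16, 12], [200.0, 16, 8], [300.0, 32, 16]]),
                weights=np.array([0.4, 0.4, 0.2]),
                benefit=np.array([False, True, True]))
print(scores.argsort()[::-1])                    # ranking, best first
```

Its main loop is a handful of vectorized array operations, which is one reason methods of this family scale well to many agents.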

]]>
<![CDATA[Validation of enhanced kinect sensor based motion capturing for gait assessment]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdcc81

Optical motion capturing systems are expensive and require substantial dedicated space to set up; on the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required, and the motion capturing system can only be placed temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and promising results have been published with respect to gait analysis. We here present a motion capturing system that is easy to set up, flexible with respect to sensor locations, and delivers gait parameters with an accuracy comparable to a gold-standard motion capturing system (VICON). Further, we demonstrate that sensor setups that track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports.

]]>