Value-Based Care and Industry Consolidation Driving Demand for Vendor Neutral Archives

After investing in health information technology for several years, the healthcare industry finds itself mired in digital data today, and it will be for years to come. Indeed, in 2016, IDC, in collaboration with Dell EMC, projected that healthcare stakeholders will produce 2,314 exabytes of data by 2020, a significant increase over the 153 exabytes generated in 2013.

This data growth comes during a time of major transformation in both the delivery of healthcare services and the way that providers are reimbursed. In the value-based care environment, where payment is tied to clinical efficiency and patient outcomes, healthcare data fragmentation is problematic. Clinicians need access to more data sources and analytics to generate insights and determine the most efficacious treatment for their patients.

The challenge for this evolving industry is that today's health IT infrastructures were not architected and deployed in a way that streamlines data sharing, even within a single institution.

Until recently, healthcare organizations deployed diagnostic tools to meet the needs of individual departments. These isolated projects included localized storage infrastructure, creating a new data silo with each additional deployment. This approach subsequently complicated the task of compiling a complete digital picture of a patient's health from disparate information sources.

The continued rise in hospital mergers and acquisitions adds further complexity as healthcare IT systems undergo consolidation. Pressure to better manage costs and significantly improve the patient experience has led providers towards consolidation, but it has not always been easy for merging organizations to synthesize their data along with their administrative operations.

Siloed Infrastructure Unable to Provide a 360-Degree Patient View to Clinicians

Traditional IT infrastructure – and in particular, the storage architectures supporting existing and new modalities – represents a significant roadblock for providers seeking an integrated workflow across departments.

Legacy workflows, infrastructures, and storage architectures are not designed to support a 360-degree view of the patient, nor can they handle the accelerated growth of medical imaging data that will eventually feed machine learning and artificial intelligence models geared towards clinical decision support.

Historically, if an organization had three PACS, a physician wanting to look at a patient's images across all systems would have to open three different viewers, log in three separate times, and search for the patient in three different ways. The physician would then need to review the images manually and assemble a complete picture in their head.

VNAs Ensure Reliable Access to the Right Data at the Right Time

Fortunately, a solution for healthcare workflow integration in medical imaging does exist: the vendor-neutral archive (VNA). A storage infrastructure that does not require a redesign every time an organization adds new data sources or makes workflow adjustments can significantly improve efficiency and IT agility, offering enhanced insights and more reliable access to the right data at the right time.

Migrating imaging files to new storage systems during an architecture upgrade, for example, can be a complicated project. Most organizations undertake this type of periodic refresh every three to five years to prevent hardware failures and upgrade infrastructure capabilities. As organizations generate and store more medical imaging data, the project gets more complex and costly each time. A VNA can prevent data gaps by managing all updates to DICOM files and pointers, drastically reducing the burdens and costs of this critical process.

A VNA also allows a healthcare organization to integrate viewing capabilities and storage with other health IT solutions regardless of its specific PACS application vendor, and its automated data reconciliation capabilities reduce the time spent ensuring that healthcare providers can retrieve the data they need to make informed decisions.

Ultimately, provider organizations should seek to create future-proof infrastructure that is flexible enough to support a broad range of anticipated performance demands, including advanced data analytics, expansion into private, hybrid, or public clouds, and constantly changing clinical workflows.

The VNA is a foundational component of a healthcare ecosystem predicated on efficiency and quality. The challenge lies in preparing for a VNA deployment and choosing the right strategy for the successful launch of a new system.

To achieve these goals, organizations may wish to partner with infrastructure vendors who can help them scale their architecture without downtime, consolidate without detracting from day-to-day performance, and reduce or eliminate the burden of future migrations.

Value-based care and provider consolidation are driving healthcare organizations to reevaluate the status of their current resources, especially health information. While some health systems and hospitals have the financial capital for a full VNA deployment, others may have to consider a phased approach. In either case, a business imperative is driving medical imaging integration with other health IT systems to ensure that physicians are making care decisions based on the most pertinent, complete, and timely patient data.

For more information on Dell EMC's Vendor Neutral Archiving solution, download our white paper here.
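
To make the "single query instead of three viewers" idea above concrete, the sketch below asks a VNA for all of a patient's studies through DICOMweb QIDO-RS, the standard REST query interface that most modern archives expose. This is a minimal illustration under assumptions, not Dell EMC's implementation: the endpoint URL and patient ID are placeholders, and only standard study-level DICOM attributes are used.

```python
import requests

# Hypothetical VNA endpoint; any DICOMweb-conformant archive exposes a QIDO-RS URL like this.
QIDO_BASE = "https://vna.example-hospital.org/dicomweb"

def find_studies(patient_id: str) -> list:
    """Return study-level metadata for one patient, regardless of which PACS produced the images."""
    response = requests.get(
        f"{QIDO_BASE}/studies",
        params={
            "PatientID": patient_id,
            "includefield": "StudyDescription,ModalitiesInStudy",
        },
        headers={"Accept": "application/dicom+json"},
        timeout=30,
    )
    response.raise_for_status()
    if response.status_code == 204:   # QIDO-RS returns 204 No Content when nothing matches
        return []
    return response.json()            # One JSON object per matching study

if __name__ == "__main__":
    for study in find_studies("PAT-12345"):   # Placeholder patient ID
        # QIDO-RS keys each attribute by its DICOM tag: (0020,000D) StudyInstanceUID,
        # (0008,0020) StudyDate, (0008,0061) ModalitiesInStudy.
        uid = study["0020000D"]["Value"][0]
        date = study.get("00080020", {}).get("Value", ["unknown"])[0]
        modalities = study.get("00080061", {}).get("Value", [])
        print(f"{date}  {','.join(modalities):12}  {uid}")
```

Because the archive, rather than each departmental PACS, owns the canonical copy, the same call works whether a study originated in radiology, cardiology, or a newly acquired facility.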

Where Were You When Artificial Intelligence Transformed the Enterprise?

Where were you when artificial intelligence (AI) came online? Remember that science fiction movie where AI takes over in a near-dystopian future? The plot revolves around a scientist who accidentally puts AI online, only to realize the mistake too late. Soon the machines become humanity's overlords. While these science fiction scenarios are entertaining, they really just stoke fear and add to the confusion around AI. What enterprises should actually worry about is how their competition is embracing AI to get a leg up.

Where were you when your competition put Artificial Intelligence online?

Artificial Intelligence in the Enterprise

Implementations of artificial intelligence with natural language processing are changing the way enterprises interact with customers and conduct customer calls. Organizations are also embracing another form of artificial intelligence, computer vision, which is changing the way doctors read MRIs and how the transportation industry operates. It's clear that artificial intelligence and deep learning are making an impact in the enterprise. If you are feeling behind, no problem: let's walk through the strategies enterprises are embracing to implement AI in their organizations.

Key Strategies for Enterprise AI

The first step to bringing AI into your organization is to define an AI strategy. Jack Welch said it best: "In reality, strategy is actually very straightforward. You pick a general direction and implement like hell." Designing a strategy starts with understanding the business value that AI will bring into the enterprise. For example, a hospital might have an AI initiative to reduce the time necessary to recognize patients experiencing a stroke from CT scans. Reducing that time by minutes or hours could help get critical care to patients sooner and ultimately deliver better outcomes. By narrowing and defining a strategy, Data Scientists and Data Engineers have a concrete goal to focus on achieving.

Once you have a strategy in mind, the most important factor in the success of artificial intelligence projects is the data. Successful AI models cannot be built without it. Data is an organization's number one competitive advantage. In fact, AI and deep learning thrive on big data: an artificial intelligence model that helps detect Parkinson's disease must be trained with considerable amounts of data. If data is the most critical factor, then architecting proper data pipelines is paramount. Enterprises must embrace scale-out architectures that break down data silos and provide the flexibility to expand based on the performance needs of the workload. Only with scale-out architectures can Data Engineers help unlock the potential in data.

After ensuring data pipelines are built on a scale-out solution, it is time to fail quickly. Yes, Data Scientists and Data Engineers have permission to fail, but in a smart fashion. Successful Data Science teams embracing AI have learned how to fail quickly. Leveraging GPU processing allows Data Scientists to build AI models faster than at any time in history. To speed up the development process through failures, solutions should incorporate GPUs or other accelerated compute. Not every model will end in success, but each attempt leads Data Scientists closer to the solution. Ever watch a small child who is first learning how to walk? Learning to walk is a natural practice of trial and error. If the child waits for all the information and the perfect environment, they may never learn to walk. However, that child doesn't learn to walk on a balance beam; they start in a controlled environment where it is safe to fail. A Data Science team's start in AI should take the same approach, embracing trial and error while capturing data from failures and successes to iterate quickly into the next cycle.

Dell Technologies AI Ready

The journey may seem overwhelming. However, forward-thinking enterprises that take on the challenge of AI will gain market share. Dell Technologies is well placed to guide customers through their AI journey, from services that help define an artificial intelligence strategy to industry-leading AI solutions like the Dell EMC Ready Solutions for AI and Reference Architectures for AI. These solutions give you informed choice and flexibility in how you deliver NVIDIA GPU-accelerated compute, complemented by Dell EMC Isilon's high-performance, high-bandwidth scale-out storage, which simplifies data management for training the most complex deep learning models.

Click to watch my coffee conversation with Sophia

Learn More About AI

Ready to dive into how to architect solutions for AI and DL workflows? Learn more by watching my coffee interview (above) with Sophia the Robot and attending our Magic of AI webinar on Tuesday, June 11th at 11:00 AM CST. During the webinar, Sophia the Robot and I will speak about the proliferation of deep learning and what the future holds for artificial intelligence. Be sure to tune in to learn more about embracing AI in the enterprise.
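
As a simplified illustration of the fail-fast loop described above, the sketch below runs several short training experiments on a GPU when one is available, records the result of every attempt, and carries the best configuration forward into the next cycle. It uses PyTorch with a toy model and synthetic data purely for illustration; the layer sizes, learning rates, and dataset are placeholders, not part of any Dell EMC reference architecture.

```python
import torch
from torch import nn

# Use GPU-accelerated compute when present; fall back to CPU so the sketch still runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"

def run_experiment(hidden_units: int, lr: float, epochs: int = 20) -> float:
    """One quick, cheap experiment: train a tiny classifier on synthetic data and return accuracy."""
    X = torch.randn(2048, 16, device=device)
    y = (X.sum(dim=1) > 0).long()            # Toy labels standing in for real training data
    model = nn.Sequential(
        nn.Linear(16, hidden_units), nn.ReLU(), nn.Linear(hidden_units, 2)
    ).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return (model(X).argmax(dim=1) == y).float().mean().item()

# Fail fast: try several configurations, record every result, then iterate on the best one.
results = []
for hidden in (8, 32, 128):
    for lr in (1e-3, 1e-2):
        acc = run_experiment(hidden, lr)
        results.append({"hidden": hidden, "lr": lr, "accuracy": acc})
        print(f"hidden={hidden:<4} lr={lr:<6} accuracy={acc:.3f}")

best = max(results, key=lambda r: r["accuracy"])
print("Carry forward into the next cycle:", best)
```

The point is the shape of the loop, not the model: each experiment is small and cheap, failures are kept as data, and the winning configuration seeds the next round of trial and error.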

To the Edge and Beyond: What Does a Programmable Fabric Look Like?

In the first blog in this series we talked about programmable fabrics and their use cases. In this blog we'll look at what a programmable fabric actually looks like.

The following diagram shows the high-level architecture of a programmable fabric:

The programmable fabric can be broken down into two main layers: the control plane and the data plane.

Control Plane Layer

The control plane layer is responsible for configuring and managing the data plane and is normally more centrally located, i.e., one per PoP or region.

The control plane is normally divided into three separate domains – Fabric, Telemetry, and Configuration and Management – to allow them to scale independently. However, they could be implemented in a single software controller, for example in a small-scale deployment.

1. Fabric Controller

The Fabric Controller controls the loading and programming of the data plane pipeline, using the P4 Runtime interface to communicate with the data plane's programmable forwarding engine as shown in the diagram below.

A number of controller applications, or "network functions," talk to the Fabric Controller to control various aspects of the programmable fabric.

The Fabric Management applications manage the underlying network fabric setup and configuration. They can also be thought of as a set of virtualized switch and router network functions that provide the underlying network fabric on top of the programmable fabric. The Fabric Management applications rely on user plane functionality being implemented in the P4 pipeline in the PFE.

The NF control plane uses a CUPS (Control and User Plane Separation) methodology to implement the control plane portion of a network function, while the user plane functions are pushed down into the data plane node as described in this document.

2. Telemetry Controller

The Telemetry Controller allows applications (e.g., Fault Management) to collect telemetry on the network elements in the programmable fabric using the fabric's gNMI streaming interface. It is expected that other applications will use techniques like machine learning to make more intelligent decisions and feed control-loop information back into the Fabric Controller applications, enabling pre-emptive service reconfiguration and repair as we move towards autonomous networks.

3. Configuration and Management Controller

The Configuration and Management Controller provides applications with common northbound interfaces and models for the configuration and management of the programmable fabric.

The OpenConfig group provides a set of network data models that allow network functions to be managed using a common set of tools and protocols. The gNMI and gNOI interfaces use the OpenConfig models to allow efficient access to configure and manage the network functions in the programmable fabric.

Data Plane Layer

The data plane does the bulk of the network traffic forwarding, sending only exception or control packets up to the control plane for processing (e.g., DHCP for a new IPoE session in a BNG-c). While the data plane might normally be thought of as a standalone network switch, it could also be a SmartNIC in a compute server, allowing the programmable fabric to be extended up into the server (e.g., using P4 to define a pipeline in an FPGA SmartNIC).

The data plane is normally made up of several components:

Data Plane Node (DPN): the hardware that houses the data plane forwarding function (i.e., all the components below). This could be a standalone network switch with a PFE like Intel/Barefoot's Tofino chip, or a compute server with a P4-based SmartNIC like Intel's PAC N3000.

Data Plane Agent (DP-Agent): provides the standardised northbound data plane interfaces (P4 Runtime, gNMI, and gNOI) that allow the control plane network functions to communicate with the data plane. An example implementation of the DP-Agent is the ONF's Stratum project.

Network Function user plane (NF-u): the user plane portions of network functions, which can be defined in the programmable pipeline (using P4, for example) and then loaded into the PFE to process packets. These functions are programmed by their control plane counterparts (e.g., BNG-c, UPF-c, Fabric Manager-c) so that the bulk of the traffic is handled in the PFE without needing to go up to the control plane for processing.

Programmable Forwarding Engine (PFE): the actual hardware that does the packet forwarding. Examples include a P4-based switch chipset such as Intel/Barefoot's Tofino, or an FPGA-based SmartNIC that uses P4 to define the packet forwarding pipeline.

Dell Technologies is committed to driving disaggregation and innovation through open architectures and the competitiveness this brings to our customers' networks. The high-level architecture described in this blog is in line with the Open Networking Foundation's Stratum and NG-SDN projects and provides open building blocks that allow telecommunication providers to build open, scalable and cost-effective edge solutions.
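
To make the control plane/data plane split described above more concrete, here is a deliberately simplified Python model of the interaction: a controller-side network function pushes match-action table entries down to a PFE, which then forwards matching packets locally and punts only exceptions up to the control plane. It mimics the shape of a P4 Runtime workflow but is not the actual P4 Runtime API; the class names, table name, and crude prefix match are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TableEntry:
    """One match-action rule, conceptually what a controller writes via P4 Runtime."""
    table: str
    match: dict    # e.g. {"ipv4.dst": "10.0.1.0/24"}
    action: str    # e.g. "forward"
    params: dict   # e.g. {"port": 3}

@dataclass
class ProgrammableForwardingEngine:
    """Toy PFE: holds installed pipeline state and forwards packets against it."""
    tables: dict = field(default_factory=dict)
    punted: list = field(default_factory=list)  # exception/control packets sent to the control plane

    def write(self, entry: TableEntry) -> None:
        # In a real deployment this would be a P4 Runtime Write RPC handled by the DP-Agent (e.g. Stratum).
        self.tables.setdefault(entry.table, []).append(entry)

    def process(self, packet: dict) -> str:
        for entry in self.tables.get("ipv4_lpm", []):
            # Crude string prefix check stands in for real longest-prefix matching.
            prefix = entry.match["ipv4.dst"].split("/")[0].rsplit(".", 1)[0]
            if packet["ipv4.dst"].startswith(prefix):
                return f"forwarded out port {entry.params['port']}"
        self.punted.append(packet)  # no match: punt to the controller, as with a new session
        return "punted to control plane"

# Controller-side "Fabric Management" network function programming the data plane.
pfe = ProgrammableForwardingEngine()
pfe.write(TableEntry("ipv4_lpm", {"ipv4.dst": "10.0.1.0/24"}, "forward", {"port": 3}))

print(pfe.process({"ipv4.dst": "10.0.1.7"}))     # handled entirely in the data plane
print(pfe.process({"ipv4.dst": "192.168.9.9"}))  # exception traffic goes up to the control plane
```

In a real deployment the write path would be a gRPC-based P4 Runtime request carried to a DP-Agent such as Stratum, and the punted traffic would reach the controller over the same channel, which is exactly the CUPS-style split between NF-c and NF-u described in this blog.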
