The release of CrashPlan 4.8 has thrown a considerable monkey wrench into the publication of our past-due procedure for FreeNAS. Additional configuration steps are now required to successfully install the software within a jail. Prior caveats related to unintentional, automatic port reconfiguration still apply to the latest release. Many individuals have contributed to the fixes and workarounds, and we want to be sure we're citing everything appropriately. While we're optimistic that bhyve virtualization in the next release of FreeNAS will simplify the process even further, staying put on 9.10.x may be a reality for many people who are happy with the stability and functionality that exists today. In other news, our first mini-review is up for the Mediasonic ProRaid HUR5-SU3. It's a fairly simple and cost-effective solution for repurposing hard drives as externally attached storage, or as a piece of a multi-pronged backup strategy.
The evolution of the multimedia PC at Reztek Systems has taken a few twists throughout 2016. The initial setup paired a 29″ LG IPS 21:9 widescreen monitor, with support for AMD's FreeSync, with an AMD-based video card. While the viewing angles and overall screen real estate were excellent, it lacked that "next generation" or "wow" factor that enables immersive experiences with various interactive entertainment titles. The launch of nVidia's 2016 Pascal-based products provided a compelling enough boost to make the switch on the display adapter side. Although frame rates increased considerably compared to the outgoing GPU, there was still a disconnect due to tearing whenever vertical sync was disabled. Last week, Best Buy had the Dell S2716DG monitor on sale for $480 USD. The current price at the time of this writing is back up to the standard MSRP of $700 USD. Does the combination of higher refresh rates, G-SYNC support, and QHD resolution (2560×1440) outweigh the fidelity and workflow benefits enabled by an IPS monitor with a 21:9 aspect ratio? It depends.
The Dell S2716DG includes additional bells and whistles, such as a 4-port USB 3.0 hub, a headphone output, the ability to rotate into portrait orientation when drafting and revising longer bodies of work, and a fairly thin bezel that doesn't pull focus away from the screen. Initial out-of-the-box performance required some manual adjustment, but the end result is perfectly serviceable, with solid and deep colors. When the switch is flipped within the display settings to run the desktop at 144Hz, the fluidity of the Windows 10 interface matches the fit and finish experienced within macOS at a lower refresh rate. Launching a quick game of 2016's DOOM further highlights the benefits of G-SYNC. Fast frame rates (60 frames per second or higher) only go so far when visual artifacts detract from the overall experience. These anomalies were not present with the new monitor; the subtle improvement and enhanced rendering consistency resulted in a more engaging experience. Even when the frame rate varies, it's far less noticeable than with the alternative adaptive-sync solutions used by AMD and Intel.
While it does not appear that G-SYNC will achieve mass-market status, due to the dramatic increase in price tied to the proprietary nVidia circuitry required to make the technology work, it honestly does provide a better experience for traditional client compute. If you're part of the Green Team and haven't gone "all in" for the full display chain, the Dell S2716DG is a solid and reasonably priced value compared to alternative (and more expensive) offerings.
The reveal of the upcoming next-generation Nintendo console via trailer has demonstrated that competition breeds innovation. With only a glimpse of some titles running, the potential of the hardware and the flexibility of its configurations are certainly a tribute to Satoru Iwata's vision of the Blue Ocean strategy. As Microsoft and Sony duke it out over specifications that follow a cadence similar to the PC-centric x86 architectures underpinning their offerings, Nintendo's path targets a much larger audience. The overall approach with the Wii U now feels more like a pilot for where Nintendo was heading all along with the Switch. The strengths, reputation, and support that are still present in the mobile space via the 3DS are amplified by a noticeable boost in visual processing capabilities and the ability to establish in-person multiplayer experiences with minimal limitation. The flexibility of control schemes and the use cases highlighted in the trailer demonstrate that Nintendo is aggressively aiming for ALL gaming markets.
Initial comments on Facebook and other forms of social media have reverted to "what are the specs?". As we get closer to launch, the Switch has the potential to drive an emotional response similar to Apple's 1984 commercial for the introduction of the Macintosh. Those of us with fond memories of the original Game Boy and its link cables, a personal and unique multiplayer setup that fell by the wayside as online services took over the competitive scene, can get back to what made gaming so special while allowing new friendships to blossom over shared experiences. Going out on a limb, a few assertions will be made:
1.) It's not about the specs once you get to 900p/1080p: Before anyone gets too wrapped up in "but it doesn't do 4K", neither do many platforms without significant concessions to frame rate or image quality. The associated displays are still not quite mass market from a living room adoption perspective. Putting a 4K display in a tablet-sized form factor would price the Switch out of reality and would negatively impact the system's battery life when it's not docked. Furthermore, if the rumor that an nVidia Tegra powers the system is true, the horsepower to provide 4K60 simply isn't there. What is there has the potential to provide near-parity with current-generation consoles, with the bonus of portability.
2.) The flexibility of design will make this the next Wii in terms of sales volumes: It’s capable of supporting core gaming and casual gaming. The detachable controller approach enables specialized controllers to be introduced further down the line to support games and experiences that we have not even considered yet. The redesign of the Nintendo Network into something that is far more usable will provide benefits, rewards, or special offers to entice the “candy crushers” and “panda jammers” to step outside of the smartphone gaming scene. A universal gaming and content consumption platform that excels equally at touch and traditional control schemes makes it far more accessible and reduces any potential learning curves. If the alleged $399 bundle includes the equivalent of a “Wii Sports” pack-in to get people hooked, Microsoft and Sony will need to rethink their approaches beyond VR and cloud-streaming.
3.) It has a headphone jack. That’s sure to please some people.
4.) This is the type of innovation that we normally expect or experience from companies such as Apple, Google, and Microsoft (when they're really trying, as with the Surface Book and HoloLens). I would not be surprised if, during the Switch's life cycle, one of the tech titans decides to go all in and attempt to buy Nintendo. There's a hidden road map starting with the Switch that can lead to the "stickiness" that the major players prefer to have within their ecosystems.
On July 25th, 2016, Verizon made its formal offer to buy Yahoo for $4.8 billion. We had questioned what value may or may not have remained. Although valuations were high due to Yahoo's stakes in Alibaba and Yahoo Japan, the company's market share and user base have been dwindling in the face of superior offerings from competitors that not only work hard to remain relevant, but also continuously innovate. The revolving door at the CEO position starting in 2009 highlights that shareholders and creating value were at the forefront, while optimizing infrastructure and services, a critical component of Yahoo's systems, was neglected. Now that it is known that at least 500 million user accounts were pilfered through the end of 2014, this would be the time to truly take stock of what could have been done to prevent what Symantec claims is the largest known security breach yet.
With the continued advancement of readily available compute, minimizing the risk of a breach requires significant investment in technology and processes to keep up with malicious actors (state sponsored or otherwise), and that investment needs to be a top priority for any organization that conducts business using any element of the Internet. If a company that was once a powerhouse in search farms out its search bar to the highest bidder, what does it do with the profits? Does it procure intrusion detection and/or prevention solutions, monitoring solutions, and complementary security solutions to the tune of $1.1 billion, or does it buy a blogging site and promise not to screw it up? History has already spoken under the direction of the winner of the most golden of parachutes ever. Don't protect and enhance the security of the user base; buy something that would see a reduction in user engagement if extensive efforts to litter its pages with ads to increase revenue were enforced.
The fallout of this hack is going to take years to play out. While Verizon did well with prior acquisitions of companies and properties that fell out of favor, Yahoo is an anomaly that required a far more sweeping analysis before tendering an offer. If any valuable IP remains, Microsoft could have circled back and offered pennies on the dollar for it. While the Verizon buyout total is far less than the $53 billion offered by the folks in Redmond in 2008, it's still more than should realistically have been offered. From an information security and information technology perspective, this will make for an interesting case study in the future.
The integration of technology within our lives and homes has given rise to the procurement of devices and solutions that would not have been commonplace a decade ago. Security cameras, network-attached storage (NAS), smart doorbells, and connected thermostats are just some examples of solutions that have become mainstream and may be found in the wild. For devices that are passively cooled via their housing or integrated heat sinks, noise is never an issue for as long as the device remains usable. For NAS units and the switches they may be connected to, active cooling via fans connected directly to the motherboard can contribute to the amount of noise encountered within a given room.
Our previous switch, a twenty-four port HP Enterprise V1910, shipped with a 40mm Delta cooling fan. The speed and design of this cooling element made it intolerable to remain in close proximity to the unit. With a little research, it was determined that the Noctua NF-A4x10 would serve as a viable and quiet replacement for the offending Delta unit. Installation using the Scotch locks was seamless, and it took multiple checks to confirm that the fan was actually running. This level of silence is what consumers should be able to consistently expect from solutions that sit near living areas, where the noise can be grating.
We undertook a comparable exercise this morning with a Ubiquiti Networks UniFi Security Gateway Pro 4 (USG-PRO-4) unit sitting at the top of our rack. The default AVC fans installed in the unit (DS04020B12L, quantity of two) were exceptionally shrill and high-pitched during the pre-rack setup performed to modify the default address. The following thread confirms that we are not alone in experiencing this phenomenon. The initial effort of using BlackNoise BlackSilent PM-2 fans (3800 RPM) to replace the offending AVC units (~4500 RPM) was a fifteen-minute exercise that provided results exceeding expectations. The high-pitched whine is eliminated without compromising cooling capability. The "loudest noise" in the room has transitioned from the USG-PRO-4 to the pair of UniFi US-24-250W switches. As each switch contains two equally offensive Delta fans, the procedure will be worthwhile to undertake.
In the consumer space, the cost differential between a sub-par fan and a noise-optimized fan equates to a few dollars. It would be far better for manufacturers to account for outlying applications of their products and incorporate an improved cooling solution at the factory. Large organizations with massive and diverse supply chains can obtain an upgraded fan at a reduced cost; the ability for individuals to procure the same cooling product at a reasonable price is fairly slim. Depending on the vendor, replacement solutions from manufacturers such as Noctua or BlackNoise fetch approximately $15 USD apiece. Multiply this cost by the total number of fans per device, and reducing the noise pollution can quickly become an expensive proposition. Worse yet, if a competing product with equivalent functionality and a superior cooling solution ends up being less expensive than the desired product plus the cost of replacing its fans, revenue may be lost.
Since our last post, there have been developments from a number of vendors that have released, or will soon release, new products that fill competitive gaps. Many exciting developments have come from AMD throughout this year. The introduction of their RX-series GPUs has ushered in very affordable performance for a broad demographic. The top end of the stack addressed moderate visual fidelity needs at 1080p and 1440p resolutions for a reasonable cost and a viable power utilization model. While the top-end RX480 did suffer from power draw issues that exceeded specifications for the slot or the 6-pin power connector, a fix was delivered via software.
After releasing multiple solutions in the consumer space and teasing the future of the professional space via the Radeon Pro SSG, AMD upped the ante with a controlled demonstration that placed its upcoming Zen solution against Intel's high-end desktop platform. Although many variables were controlled to make the comparison as "apples to apples" as possible, there are still unknowns as to how the final retail product will be configured and perform. With Intel launching their Kaby Lake-based processors later this year, Zen needs to hit the market aggressively to improve the landscape for consumers. If Zen's performance remains within a stone's throw of Intel's portfolio at a non-trivial cost savings, it may provide enough value to spur growth in the PC market and force Intel to compete with comparable core counts to justify an entirely new upgrade cycle.
In the area of storage, the major players in the traditional hard drive space have ten-terabyte offerings available. Seagate's showing in this space has garnered some positive impressions. While Seagate's reputation within various communities is less than favorable due to repeated mishaps and cut corners that negatively impact the perceived quality of their products, the performance of such a large drive is impressive. Is there a valid use case for this much capacity within a home setting? Perhaps. If a household is storing RAW images captured via a DSLR camera, a pair of these drives in a NAS or a PC may fit the bill and provide room for growth. Backing up that much data off-premises via a cloud provider may present its own challenges with respect to bandwidth caps enforced by ISPs.
While we take broadband for granted in this day and age, the ability to share information comes with an up-front cost and a hidden cost. Most traditional providers offer you fixed speeds for download and upload, and may charge you extra for higher tiers of service and a cable modem rental. The latter cost is one that can be mitigated by procuring your own modem using a list of equipment that has been approved or certified by the ISP. The former is a cost that can be discounted if you opt to obtain additional services such as television or a digital phone line. Our point of contention in this model pertains to the tertiary charge associated with data caps and cramming for services that may not be provided at all.
In the early days of cable broadband, when three megabits per second or more was revolutionary, the pipe was an unlimited well from which all subscribers could drink. As speeds became faster, the cable providers felt inclined to boost profits by instituting per-user data caps. The abuse of terms of service via legitimate and questionable peer-to-peer file transfers provided the inspiration for these companies to develop standard-issue boilerplate about protecting the quality of the network and the costs that get passed on to everyone else. If you don't watch your bill like a hawk, you may find erroneous charges for converter boxes that you don't own, premium services that cannot be provided, outrageous charges for utilization that did not happen, or any other deceitful method that allows the extraction of hard-earned money with no tangible or realistic product.
The threats to these legacy stalwarts come in many forms, and the answers to maintain levels of revenue are not consumer friendly in the least. The implementation of data caps for residential service stifles the ability of competitors to truly innovate and provides ample proof that the current delivery model is broken. Data consumption and online activity vary between households of different sizes and correlate with the number of connected devices present. In a single-person household, it would be realistic to assume a minimum of a traditional compute device, a smart device, and potentially a smart television, game console, or media player with associated apps that provide the means to consume over-the-top (OTT) services such as Netflix, Amazon Instant Video, or Hulu. In the current model of 1080p streams, a cap of one terabyte per month may be more than sufficient for most use cases when accounting for binge watching new seasons of popular shows, sharing content with friends and family, communicating via the service or application of choice, and getting every cellular device onto WiFi to stay under its own data cap.
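As a quick sanity check on that one-terabyte figure, here's the back-of-the-envelope math, assuming a typical 1080p stream runs at roughly 5 Mbps (an assumed bitrate; actual numbers vary by service and title):

```python
# Back-of-the-envelope check: how far does a 1 TB cap go at 1080p?
# Assumes a ~5 Mbps stream; actual bitrates vary by service and title.

STREAM_MBPS = 5      # assumed 1080p bitrate, megabits per second
CAP_GB = 1000        # 1 TB cap, using the ISP's decimal gigabytes

gb_per_hour = STREAM_MBPS * 3600 / 8 / 1000    # Mb/s -> GB per hour
hours_under_cap = CAP_GB / gb_per_hour

print(f"~{gb_per_hour:.2f} GB per streaming hour")          # ~2.25 GB
print(f"~{hours_under_cap:.0f} hours of 1080p under 1 TB")  # ~444 hours
```

Even at north of fourteen hours of streaming per day, a single 1080p stream stays under a one-terabyte cap; it's the lower caps and higher bitrates below where the math falls apart.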
What if that cap was less than a terabyte? What if you’ve adopted cloud-enabled services for data protection and storage? What happens when you have three independent streams from OTT services running simultaneously without restricting data consumption or video quality? What happens is the racket being instituted by Armstrong Communications. Pay exorbitant amounts of money for a high speed cable modem service that nets you a fairly generous bandwidth allocation and a three hundred gigabyte data cap. No, that’s not a typo! In a situation where the video game console of choice offers you three free games with your annual subscription, downloading said titles in a reasonable amount of time effectively destroys your data balance. You’ll reach your cap well before the billing cycle ends.
If you want more room under the cap, you have to take the full bundle. More than doubling your bill will provide you with some of the worst channel bundle configurations under the sun, with many channels lacking a high-definition feed. You'll also get your standard-issue voice over IP service running through the modem, and your cap goes up by two hundred gigabytes per month. Is five hundred gigabytes enough for larger households when your download rate eclipses two hundred megabits per second? Survey says… No! It's only going to get worse as consumers eventually transition to ultra-high-definition television sets. Your variable five megabit (or more) per second stream will quintuple. An hour-long show streamed at approximately twenty-five megabits per second ensures you'll consume over eleven gigabytes of your cap in one shot. After roughly forty-five hours of shows, it's time to pay the piper again with additional charges in increments of fifty-gigabyte allotments. Keep going over consistently? Use the option to buy additional capacity up front. Somehow manage to stay under the cap after paying extra? Good for you, and thanks for the free money!
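For the curious, the UHD arithmetic above works out as follows (same approach as the earlier sketch, just at the twenty-five megabit rate and five hundred gigabyte bundled cap cited above):

```python
# The UHD arithmetic from the paragraph above: an hour-long show at ~25 Mbps.

UHD_MBPS = 25        # bitrate cited above for an ultra-high-definition stream
CAP_GB = 500         # bundled cap discussed above, decimal gigabytes

gb_per_hour = UHD_MBPS * 3600 / 8 / 1000     # 25 Mb/s over an hour -> GB
hours_to_cap = CAP_GB / gb_per_hour

print(f"~{gb_per_hour:.2f} GB per hour of UHD")     # ~11.25 GB ("over eleven")
print(f"~{hours_to_cap:.0f} hours until the cap")   # ~44-45 hours of shows
```

At that rate, the overage charges practically write themselves.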
If taking your money wasn't enough, said entity will also leverage the infamous practice of cramming, tacking on an extra sixty dollars every year with their "Zoomshare" offering. Previously, the description of this service pertained to photo sharing, akin to the myriad photo sharing sites that were… what's the word we're looking for… free. The charge was later revised to state that the service consists of an ISP-provided WiFi router for sharing the Internet connection. Comcast, Verizon, and other providers do offer a true "all in one" modem/WiFi router/VoIP combination unit with battery backup; in those cases, you're renting the device from the ISP. This secondary cramming scam does not actually provide any equipment. The charge just shows up. Calling customer service and explaining that the ISP provided no such device results in a plea of ignorance and a refusal to do anything. How does one return an imaginary object that they neither own nor possess?
In areas where a given provider has a monopoly, the alternative options aren't feasible. The latency of satellite Internet access presents its own challenges for applications with real-time implications. Getting a competitor to run service to your residence may incur a charge that would otherwise cover a very substantial down payment on a new house. Tethering one's phone and attempting to use it as the central source of Internet connectivity won't work either, thanks to paltry mobile data caps and restrictions on performing large updates for mobile devices when not connected to WiFi.
These archaic tactics only protect the interest of the incumbent cable company. The variety and maturity of OTT services can meet the needs of the majority of individuals. The local broadcasters, in an effort to maintain the ability to generate advertising revenue, will broadcast one or more streams to your house for the cost of an antenna. If you have a cell phone, do you really need a land line? If you prefer to have a land line available for emergencies, do you really need to pay thirty dollars a month (or more) when you can get equivalent service using the same VoIP technologies for pennies on the dollar?
Excuses related to ISPs spending money upgrading their networks hold little weight. There are still plenty of DOCSIS 1.x modems out in the wild, potentially providing less speed than an individual is paying for. Does the ISP actively inventory and manage the life cycle of its on-premises equipment to improve the health and capability of customers' connectivity? No! Does the ISP gradually increase bandwidth allocations for loyal customers at no additional cost? No! Does the ISP provide any type of assistance to families that are optimizing their budget and need to remove or downgrade some services in an effort to put food on the table? Hell no! They'll sell you the same bandwidth and fewer channels for the same money.
Disruptive solutions such as Google Fiber hold promise. People clamor for the speeds and capabilities of such connections. The fine folks at Alphabet are willing to make the necessary investments to bring fiber to the premises. Municipalities are welcoming the investment and the modernization of infrastructure. Guess who's stopping them in markets where they plan to deploy? The incumbents. The same incumbents that gouge and limit choice. The implications of destroying the monopoly or oligopoly are tremendously pro-consumer. Competition in this space is long overdue.
The last few weeks have provided much insight into the future of performance-oriented compute, graphics, and the benefits of competition. The current gaming flagships, the nVidia GeForce GTX 1080 and GTX 1070, have been evaluated and appear to rule the high-end roost for the remainder of 2016. Instead of immediately launching an alternative, AMD has taken the tack of producing the reasonably priced Radeon RX480. This offering, based on the Polaris 10 architecture, provides performance equivalent to cards that cost more, both in cash and in operational (power) terms.
These competing solutions tackle different ends of the market, and provide a much better price-to-performance ratio than the outgoing generation of GPUs. First and foremost, the adapters that have hit the market or will become available shortly have superior power profiles. AMD's offering will effectively mitigate the most considerable issues that persisted through the iterative branding strategy – power consumption and heat generation. The choice to boost clocks and memory capacities prior to the release of the Fury line in Q3 2015 resulted in add-in cards that could easily double as a space heater and a profit center for the utilities. If the RX480 delivers the claimed performance within a 150W TDP profile, it will open up many possibilities for promoting the adoption of VR. When the GPU is capable and costs 50-60% less, funds that normally get allocated to this building block can be redirected toward the necessary headset hardware.
nVidia's approach differs in that they were gunning for pole position with the demonstrated performance of the new GTX wonder twins. The memory snafu that tainted the perception of the GTX 970 has not persisted with the GTX 1070. Both cards beat the previous generation's high-end offerings at a fraction of the price. While this approach doesn't extend VR to the masses, the longevity and viability of these solutions is worth the price of admission. Regardless of whether you prefer Team Green or Team Red, the consumer wins. This is the benefit of having healthy competition and understanding the end markets that are being serviced.
Intel, on the other hand, missed the mark with their latest HEDT processors from our perspective. These Broadwell-based offerings come close to parity across the three lower tiers (i7-6800K, i7-6850K, and i7-6900K) from a price and performance perspective. The fourth entry in this line has us baffled, as it's priced well out of the reach of most sane consumers. The i7-6950X is the first 10-core CPU released on the Socket 2011-3 platform, and can be yours for the low price of $1723. For one mortgage payment, or a few months of car payments, you can own a CPU that crushes multi-threaded benchmarks, yet falls behind when single-threaded performance matters.
The saving grace to offset this overpriced HEDT processor comes from the release of the Xeon E3-1500 v5 series. Quick summary: Iris Pro GPU sharing, awesome for video transcoding/encoding/VDI applications, and you can’t buy it in a socket-based format. We’re using a Xeon E3-1200 v5-based server as part of our local infrastructure, and have been pleased with the performance and memory limits that come with this platform. Making the E3-1500 v5 series a drop-in replacement for LGA 1151-based platforms would open up some mid-cycle refreshes or enable organizations to introduce these capabilities using hardware platforms that they’re already using, have certified, and can support.
When you combine bad pricing with inappropriate decisions that either gouge the prospective customer base or prevent the adoption of Iris Pro graphics for newer, focused use cases, the outcome will not be favorable. Factoring in the announced layoff of 12,000 workers, the elimination of smartphone SoCs, and the replacement of the tick-tock cadence with process-architecture-optimization, a window of opportunity has presented itself for AMD to get back on track with the release of their Zen-based processors. The lack of competition from AMD since 2011 has placed Intel in the position it currently holds. We desperately need “the next big thing” from both of these companies in the CPU space, and it can’t get here fast enough.
Ubiquiti Networks has announced its Amplifi line of routers and wireless extenders in an effort to actively participate in the consumer wireless and routing space. Our experience with their existing, enterprise-focused wireless access points has been very positive. The solution was not only easy to set up, but enabled granular control that is missing from vendors whose products grace the shelves of brick-and-mortar retailers. The deterrent to using the existing products in a home environment stems from the fact that the UniFi lineup consists of access points only; to blanket a residence with solid WiFi coverage using them, one also needs to account for a router that may also provide DHCP services. Our implementation of a UAP-LR model in tandem with a pfSense firewall was reliable and handled a high concentration of wireless connections without requiring reboots. The Amplifi offering takes the complexity out of establishing the necessary provisions to support a wireless network.
Pricing provided for the ac-capable solution is very reasonable considering the feature set and proposed configurations. Ubiquiti does a fantastic job of updating their code base and firmware regularly, which should keep this type of solution viable and supported for years to come. The base station and extenders are designed in a manner that won't necessarily need to be hidden in most homes, which stands in stark contrast to some of the overpriced monstrosities that have recently graced the market. The popularity and success of solutions such as Google OnHub (via products produced in partnership with ASUS and TP-Link), the Synology RT1900AC WiFi Router, and eero demonstrate that consumers are fed up with the subpar performance, reliability, and coverage offered by the overabundance of products that miss the mark.
In our previous post, we were awaiting the completion of burn-in testing for the FreeNAS solution that we had constructed. Approximately 48 hours were required to complete the recommended short SMART test, the badblocks test, and the long SMART test on all four drives. With the firmware fully updated for the motherboard, the out-of-band management controller, and the disks, we were off to the races. Initial tests and performance numbers were very impressive, but we encountered some unexpected issues and challenges within the first week of operation.
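For anyone following along, the burn-in boils down to roughly the sequence sketched below. Treat this as a minimal outline rather than a turnkey script: it assumes FreeBSD-style device names (ada0 through ada3), the SMART self-tests run in the background and need to be monitored before moving on, and badblocks in write mode destroys any existing data.

```python
#!/usr/bin/env python
# Minimal sketch of the burn-in sequence described above. Assumes FreeBSD-style
# device names; adjust DISKS for your system. smartctl self-tests return
# immediately and run in the background, so a real pass waits for and reviews
# each result ("smartctl -a <disk>") before continuing. badblocks -w is
# DESTRUCTIVE and must only be run on drives with no data to preserve.

import subprocess

DISKS = ["/dev/ada0", "/dev/ada1", "/dev/ada2", "/dev/ada3"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for disk in DISKS:
    run(["smartctl", "-t", "short", disk])         # 1. short SMART self-test

for disk in DISKS:
    run(["badblocks", "-b", "4096", "-ws", disk])  # 2. destructive write-mode pass

for disk in DISKS:
    run(["smartctl", "-t", "long", disk])          # 3. long SMART self-test, then review
```

With the burn-in out of the way, here's what we ran into: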
1.) A single early-life failure on one of the four Seagate Constellation ES.3 drives. Taking a few minutes to set up e-mail alerts paid off with immediate notification when the drive failed and the pool entered a degraded state. As we were within the 30-day return period, we RMA'ed the drive and awaited receipt of the replacement unit. The new drive was definitely from a different batch and had the most recent firmware. Validation was successful, and the drive was swapped into the first pool as the replacement for the failed unit.
2.) There's a CrashPlan plug-in available for FreeNAS. Taken at face value, that statement wouldn't appear any different from comparable, community-developed offerings for competing, commercial off-the-shelf solutions. However, the underlying story is far more complex and involves sifting through multiple procedures to achieve a fully updated and operational jail running the current version of CrashPlan. Many of the posts we found were helpful and fairly accurate, but some commands and file names have changed over time. We have now been performing off-site backups for close to a week without issue, and reboots of the NAS to apply updates have not impeded operation of the backup solution in any way. The jail is rock solid, and the services do not encounter issues when starting.
Having had enough time with the solution under our belt to formulate a solid opinion, the pros far outweigh the cons. The scheduling of standard maintenance tasks is very robust, development and updates come at a very fast pace, and the community – while terse if you didn't take the time to RTFM – has enough experience to help resolve any issues that you may encounter. The notifications that are e-mailed as soon as a non-standard event is detected go a long way toward enabling issues to be solved proactively, before they have a chance to take the underlying data with them. The reporting mechanisms are thorough and easy to understand. Advanced networking functions have enabled multi-VLAN support and facilitated the appropriate path configuration for iSCSI storage without considerable effort, which is a significant benefit for our virtualized environment.
The fixes we've noted while reviewing the list of updates have promptly addressed errata for configurations even more extreme than ours. We've also compiled yet another procedure that we'll publish and share covering all things CrashPlan within a FreeNAS jail. With CrashPlan 4.7 due shortly, we're pausing before publishing this body of work so that we can validate paths and file names against the newest packages. This will ensure the procedure is current and valid for the foreseeable future. We've even uncovered some additional "gotchas" that occur when you take over an old CrashPlan backup set that had a different configuration (disk layout, IP address, etc.). Stay tuned!