Component | 2017 and image core improvements (as of 1/2022) | 2022
|
Drive speed |
Hard disk drive HDD 100 MB/sec = 0.1 GB/sec
SATA-6 solid state drive 600 MB/sec = 0.6 GB/sec
NVMe solid state drive 3,000 MB/sec = 3 GB/sec (PCIe gen3)
* NVMe PCIe gen3 array ~16 GB/sec (Highpoint)
Image Core: NVMe PCIe3 arrays, ~3 GB/sec, 4 TB capacity, main acquisition PCs (2 confocals, 1 widefield) & GM's desktop PC.
|
NVMe PCIe gen 3 single drive 3 GB/sec
NVMe PCIe gen 4 single drive 7 GB/sec
NVMe PCIe gen 5 single drive 13 GB/sec
* NVMe PCIe gen3 array ~16 GB/sec (Highpoint)
* NVMe PCIe gen4 array ~32 GB/sec (Highpoint)
* NVMe PCIe gen4 array ~40 GB/sec, dual PCIe cards (Highpoint)
https://www.amazon.com/HighPoint-Technologies-SSD7540-8-Port-Controller/dp/B08LP2HTX3
future PCIe gen5 array ... ~64 GB/sec
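To make these speeds concrete, here is a minimal back-of-envelope sketch (Python; the speeds are the nominal figures above, and real sustained rates will be lower; the 100 GB dataset size is a hypothetical example):

```python
# Rough best-case time to write a dataset at the nominal sequential
# speeds listed above; sustained real-world throughput is lower.
SPEEDS_GB_PER_SEC = {
    "HDD": 0.1,
    "SATA-6 SSD": 0.6,
    "NVMe PCIe3": 3.0,
    "NVMe PCIe4": 7.0,
    "NVMe PCIe5": 13.0,
    "PCIe3 NVMe array": 16.0,
    "PCIe4 NVMe array": 32.0,
}

dataset_gb = 100  # e.g. a large time-lapse acquisition (hypothetical size)
for drive, speed in SPEEDS_GB_PER_SEC.items():
    print(f"{drive:>17}: {dataset_gb / speed:7.1f} sec for {dataset_gb} GB")
```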
**
20220428H - GRAID dual-width PCIe4 NVMe RAID array controller for up to 32 NVMe drives
(GRAID's earlier SupremeRAID model was a PCIe3 controller, also for up to 32 NVMe drives.)
GRAID Technology 20220501 launch - SupremeRAID SR-1010 manages up to 32 NVMe drives; Windows Server 2019 or 2022, or Linux.
https://www.graidtech.com/supremeraid-sr-1010
https://www.gigabyte.com/Article/gigabyte-server-and-graid-supremeraid%E2%84%A2 (this describes use of the earlier PCIe3 model)
https://www.tomshardware.com/news/gpu-powered-raid-110-gbps-19-million-iops
110 Gbps = 13.75 GBps
22 Gbps = 2.75 GBps
PCIe4 x16 is 32 GBps
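Vendor headline numbers are usually gigabits/sec (Gbps); divide by 8 for gigabytes/sec (GBps). A one-liner to check the conversions above:

```python
def gbps_to_GBps(gbps: float) -> float:
    """Convert gigabits/sec (marketing units) to gigabytes/sec."""
    return gbps / 8

print(gbps_to_GBps(110))  # 13.75 -> the SR-1010 headline figure
print(gbps_to_GBps(22))   # 2.75  -> the earlier PCIe3 model
```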
The SupremeRAID SR-1010 features a dual-slot design and measures 2.713 x 6.6 inches (height x length).
PCIe 4.0 interface (takes up 2 PCIe slots). The maximum power consumption of 70W is only 20W higher than its predecessor. The SupremeRAID SR-1010 supports RAID 0, 1, 5, 6, and 10 arrays like the previous model. The card manages up to 32 directly attached NVMe SSDs and supports the most popular Linux distributions and Windows Server 2019 and 2022.
The SupremeRAID SR-1010 will be available starting May 1 through GRAID Technology's authorized resellers and OEM partners. The card's pricing is unknown.
**
20220428Thur some NVMe prices
Samsung 980 Pro NVMe https://www.amazon.com/SAMSUNG-Internal-Gaming-MZ-V8P2T0B-AM/dp/B08RK2SR23
1TB ... $110
2TB ... $290
Sabrent Rocket 4 Plus NVMe PCIe4 https://www.amazon.com/4TB-SSD-Heatsink-PS5-SB-RKT4P-PSHS-4TB/dp/B09G2P4PYP
1TB ... $160
2TB ... $310
4TB ... $710
32 * 4 TB = 128 TB, $22,720 (for 32 drives; not including the SR-1010, E-ATX motherboard, big power supply, etc.).
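A quick cost/capacity sketch of that 32-drive build (Python; the drive price is the Sabrent 4 TB figure above, and usable capacity will be lower once RAID parity is taken out):

```python
# Raw cost/capacity for a 32-drive array at the Sabrent 4 TB price above.
# Excludes the SR-1010 controller, E-ATX motherboard, power supply, etc.
drives = 32
tb_per_drive = 4
usd_per_drive = 710

total_tb = drives * tb_per_drive    # 128 TB raw
total_usd = drives * usd_per_drive  # $22,720
print(f"{total_tb} TB raw for ${total_usd:,} (~${total_usd / total_tb:.0f}/TB)")
# RAID 5/6 parity reduces usable capacity to (n-1) or (n-2) drives' worth.
```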
|
|
|
Drive capacity |
single HDD ~10 Terabytes ... a RAID array could be big. RAID is also good for scaling speed (ex: 8 drives at 100 MB/sec enables ~800 MB/sec).
In practice, a single fluorescence microscope might acquire ~1 Terabyte/year (ex: our confocal microscopes with ~1000 hours use, i.e. 1 GB/hour) ... and we now routinely have users upload their data to their JH OneDrive (each JHU staff member and student "gets" 5 Terabytes of Microsoft OneDrive capacity, and can get more space).
Image Core: NVMe PCIe3 arrays, ~3 GB/sec, 4 TB capacity, main acquisition PCs (2 confocals, 1 widefield) & GM's desktop PC.
File server: 40 TB HDD RAID array (with 10 Gbe Ethernet on backplane, inspiring 10 Gbe networking throughout the core) - created by John Gibas, GM's predecessor as image core manager.
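The per-year estimate above is easy to parameterize; a small sketch using the usage figures quoted above:

```python
# Yearly data volume per microscope from the usage figures quoted above.
gb_per_hour = 1        # confocal; FISHscope runs closer to ~4 GB/hour
hours_per_year = 1000

tb_per_year = gb_per_hour * hours_per_year / 1000
print(f"~{tb_per_year:.0f} TB/year per microscope")
# At this rate a user's 5 TB OneDrive allotment or a 4 TB acquisition
# array comfortably covers years of data (ignoring overhead and growth).
```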
|
Capacity is not really a problem - 'speed, within budget' is the goal. |
|
motherboard
and case
|
E-ATX
and big case
Image core: all ATX or smaller.
|
E-ATX ... figure ~$1000 for a PCIe gen5 motherboard
and big case
|
|
Power supply |
?
Image core: mostly whatever was in PC chassis on purchase.
|
A 1000 Watt supply is likely going to be needed. |
|
CPU |
various Intel (Xeon etc.) or AMD (ZEN2) --> AMD ZEN3 (newer PCs) ... in 2022 ZEN4 and ZEN5 are in play, and Intel now offers "Alder Lake" CPUs across a very broad feature:price range.
|
Intel launched new CPUs late 2021 and at CES 1/2022.
AMD launch(ing) ZEN4 CPUs "winter/spring 2022", with PCIe5 (again, lots of PCIe lanes addressed directly by the CPU, no "bridge" chip).
The new 2022 CPUs are matched to new, faster RAM ("memory lanes").
**
PCIe lanes controlled by CPU vs bridge chip
GM has not seen clear documentation on whether CPU "direct" lane access is better than going through a "bridge" chip. PCIe lanes and "memory lanes" are both important. Historically, Intel consumer CPUs maxed out at 16 PCIe lanes, whereas AMD offered 64 PCIe3 lanes (Threadripper) or 128 lanes (Threadripper Pro, EPYC server-class CPUs).
20220127H:: Chat with the light microscope facility manager at the Carnegie Institution Department of Embryology (which is on the JHU Homewood campus - my apology for not getting their full name at a Nikon SoRa demo). They told me that for CPUs that require a PCIe bridge chip, the key to performance is how many lanes are used for each of the CPU-bridge and bridge-PCIe links, and that this depends on the motherboard architecture. For example, if the CPU-bridge link is x4 and the bridge-PCIe link is x4, then throughput is going to be throttled by that x4 bottleneck, even if the PCIe slots are x16.
Upshot: regardless of CPU (Intel vs AMD), your PCIe performance depends on whether you purchase the right motherboard.
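The bottleneck logic is just a min() over the links in the chain; a minimal sketch (Python; the per-lane figure is PCIe gen3's ~0.985 GB/sec):

```python
# Effective slot throughput is set by the narrowest link in the
# CPU -> bridge -> slot chain, as described above.
PCIE3_GBPS_PER_LANE = 0.985  # PCIe gen3: 8 GT/s with 128b/130b encoding

def effective_GBps(cpu_to_bridge_lanes, bridge_to_slot_lanes, slot_lanes):
    bottleneck = min(cpu_to_bridge_lanes, bridge_to_slot_lanes, slot_lanes)
    return bottleneck * PCIE3_GBPS_PER_LANE

# An x16 slot behind x4 CPU-bridge and x4 bridge-slot links:
print(effective_GBps(4, 4, 16))  # ~3.9 GB/sec, not the ~15.8 an x16 slot implies
```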
|
Intel's new CPUs - pay as you go feature upgrades
GM note: article avoids mentioning Microsoft Windows (Win10, 11, Server).
GM: nice idea in that few customers (at least consumers) would have >2 TB RAM on their new PC, so why charge a premium to all customers if that feature can be enabled by a software "patch" later. ... On the other hand, any feature upgradable by software can probably be hacked.
https://www.tomshardware.com/news/intel-software-defined-cpu-support-coming-to-linux-518
Intel's Pay-As-You-Go CPU Feature Gets Launch Window
By Anton Shilov - Feb 9, 2022
Intel's software-upgradeable CPUs to be supported by Linux 5.18 this Spring.
Intel's mysterious Software Defined Silicon (SDSi) mechanism for adding features to Xeon CPUs will be officially supported in Linux 5.18, the next major release of the operating system. SDSi allows users to add features to their CPU after they've already purchased it. Formal SDSi support means that the technology is coming to Intel's Xeon processors that will be released rather shortly, implying Sapphire Rapids will be the first CPUs with SDSi.
Intel Software Defined Silicon (SDSi) is a mechanism for activating additional silicon features in already produced and deployed server CPUs using the software. While formal support for the functionality is coming to Linux 5.18 and is set to be available this spring, Intel hasn't disclosed what exactly it plans to enable using its pay-as-you-go CPU upgrade model. We don't know how it works and what it enables, but we can make some educated guesses.
Every generation of Intel Xeon CPUs adds multiple capabilities to make Intel's server platform more versatile. For example, in addition to microarchitectural improvements and new instructions, Intel's Xeon Scalable CPUs (of various generations) added support for up to 4.5TB of memory per socket, network function virtualization, Speed Select technology, and large SGX enclave size, just to name a few. In addition, there are optimized models for search, virtual machine density, infrastructure as a service (IaaS), software as a service (SaaS), liquid cooling, media processing, and so on. With its 4th Generation Xeon Scalable 'Sapphire Rapids' CPUs, Intel plans to add even more features specialized for particular use cases. You can see an example of the SKU stack above, and it includes all types of different Xeon models:
L - Large DDR Memory Support (up to 4.5TB)
M - Medium DDR Memory Support (up to 2TB)
N - Networking/Network Function Virtualization
S - Search
T - Thermal
V - VM Density Value
Y - Intel Speed Select Technology
But virtually none of Intel's customers need all the supported features, which is why Intel has to offer specialized models. There are 57 SKUs in the Xeon Scalable 3rd-Gen lineup, for example. But from a silicon point of view, all of Intel's Xeon Scalable CPUs are essentially the same in terms of the number of cores and clocks/TDP, with various functionalities merely disabled to create different models.
Intel certainly earns a premium by offering workload-optimized SKUs, but disabling certain features from certain models, then marking them appropriately and shipping them separately from other SKUs (shipped to the same client), is expensive - it can be tens of millions of dollars per year (or even more) of added logistical costs, not to mention the confusion added to the expansive product stack.
But what if Intel only offers base models of its Xeon Scalable CPUs and then allows customers to buy the extra features they need and enable them by using a software update? This is what SDSi enables Intel to do. Other use cases include literal upgrades of certain features as they become needed and/or repurposing existing machines. For example, if a data center needs to reconfigure CPUs in terms of clocks and TDPs, it would be able to buy that capability without changing servers or CPUs.
Intel has yet to disclose all the peculiarities of SDSi and its exact plans for the mechanism, but at this point, we are pretty certain that the technology will show up soon.
|
CPU packaging with on-package HBM2E memory (likely server-class format CPU) ... hmmm - the 2E spec has been obsoleted by the HBM3 specification (see bottom of table below)
Intel marketing claims a big speed-up from putting the CPU and some RAM "on package".
20220218Fri from TomsHardware - A. Shilov post (20220217Thur).
Reminder: "Marketing, marketing, marketing, all is marketing saith the preacher" (re: Ecclesiastes). AMD Milan-X is a ZEN3 family with 3D V-Cache; AMD's ZEN4 parts (Genoa) are due 2022 H2.
Not currently a product -- end of article: "Intel will ship its Xeon Scalable 'Sapphire Rapids' processors with on-package HBM2E memory in the second half of the year."
Near bottom of story: "The addition of on-package 64GB HBM2E memory increases bandwidth available to Intel Xeon 'Sapphire Rapids' processor to approximately 1.22 TB/s, or by four times when compared to a standard Xeon 'Sapphire Rapids' CPU with eight DDR5-4800 channels. This kind of uplift is very significant for memory bandwidth dependent workloads, such as computational fluid dynamics. "
https://www.tomshardware.com/news/intel-sapphire-rapids-with-hbm-is-2x-faster-than-amds-milan-x
Intel: Sapphire Rapids with HBM Is 2X Faster than AMD's Milan-X
By Anton Shilov published 20220217
In memory bound workloads.
Intel's fourth Generation Xeon Scalable 'Sapphire Rapids' processors can get a massive performance uplift from on-package HBM2E memory in memory-bound workloads, the company revealed on Thursday. The Sapphire Rapids CPUs with on-package HBM2E are about 2.8 times faster when compared to existing AMD EPYC 'Milan' and Intel Xeon Scalable 'Ice Lake' processors. More importantly, Intel is confident enough to say that its forthcoming part is two times faster than AMD's upcoming EPYC 'Milan-X.'
"Bringing [HBM2E memory into Xeon package] gives GPU-like memory bandwidth to CPU workloads," said Raja Koduri, the head of Intel's Intel's Accelerated Computing Systems and Graphics Group. "This offers many CPU applications, as much as four times more memory bandwidth. And they do not need to make any code changes to get benefit from this."
To prove its point, Intel took the OpenFOAM computational fluid dynamics (CFD) benchmark (28M_cell_motorbiketest) and ran it on its existing Xeon Scalable 'Ice Lake-SP' CPU, a sample of its regular Xeon Scalable 'Sapphire Rapids' processor, and a pre-production version of its Xeon Scalable 'Sapphire Rapids with HBM' CPU, revealing rather massive advantage that the upcoming CPUs will have over current platforms.
The difference that on-package HBM2E brings is indeed very significant: while a regular Sapphire Rapids is around 60% faster than an Ice Lake-SP, an HBM2E-equipped Sapphire Rapids brings in a whopping 180% performance boost.
What is perhaps more interesting is that Intel also compared performance of its future processors to an unknown AMD EPYC 'Milan' CPU (which performs just like Intel's Xeon 'Ice Lake', according to Intel and OpenBenchmarking.org) as well as yet-to-be-released EPYC 'Milan-X' processor that carries 256MB of L3 and 512MB of 3D V-Cache. Based on results from Intel, AMD's 3D V-Cache only improves performance by about 30%, which means that even a regular Sapphire Rapids will be faster than this part. By contrast, Intel's Sapphire Rapids with HBM2E will offer more than two times (or 115%) higher performance than Milan-X in OpenFOAM computational fluid dynamics (CFD) benchmark.
Performance claims like these made by companies must be verified by independent testers (especially given the fact that some other benchmark results show a different picture), but Intel seems to be very optimistic about its Sapphire Rapids processors equipped with HBM2E memory.
The addition of on-package 64GB HBM2E memory increases bandwidth available to Intel Xeon 'Sapphire Rapids' processor to approximately 1.22 TB/s, or by four times when compared to a standard Xeon 'Sapphire Rapids' CPU with eight DDR5-4800 channels. This kind of uplift is very significant for memory bandwidth dependent workloads, such as computational fluid dynamics. What is even more attractive is that developers do not need to change their code to take advantage of that bandwidth, assuming that the Sapphire Rapids HBM2E system is configured properly and HBM2E memory is operating in the right mode.
"Computational fluid dynamics is one of the applications that benefits from memory bandwidth performance," explained Koduri. "CFD is routinely used today inside a variety of HPC disciplines and industries significantly reducing product development, time and cost. We tested OpenFOAM, a leading open source HPC workload for CFD on a pre-production Xeon HBM2E system. As you can see it performs significantly faster than our current generation Xeon processor."
Intel will ship its Xeon Scalable 'Sapphire Rapids' processors with on-package HBM2E memory in the second half of the year.
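The article's "~1.22 TB/s, or four times" figure can be sanity-checked from the DDR5 side (a sketch; it counts each DDR5 channel as 64 bits wide, which is how these platform totals are usually quoted):

```python
# Sanity check on the article: 8 channels of DDR5-4800 vs. the ~1.22 TB/s claim.
channels = 8
transfers_per_sec = 4800e6   # DDR5-4800 = 4.8 GT/s per channel
bytes_per_transfer = 8       # 64-bit channel

ddr5_GBps = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(ddr5_GBps)             # ~307 GB/sec from DDR5 alone
print(ddr5_GBps * 4 / 1000)  # ~1.23 TB/s -- the quoted "four times" HBM2E uplift
```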
***
part of Nov 15, 2021 story below
https://www.tomshardware.com/news/intels-sapphire-rapids-to-have-64-gigabytes-of-hbm2e-memory
All told, Intel's Sapphire Rapids data center chips come with up to 64GB of HBM2e memory, eight channels of DDR5, PCIe 5.0, and support for Optane memory and CXL 1.1, meaning they have a full roster of connectivity tech to take on AMD's forthcoming Milan-X chips that will come with a different take on boosting memory capacity.
While Intel has gone with HBM2e for Sapphire Rapids, AMD has decided to boost L3 capacity using hybrid bonding technology to provide up to 768MB of L3 cache per chip. Sapphire Rapids will also grapple with AMD's forthcoming Zen 4 96-core Genoa and 128-core Bergamo chips, both fabbed on TSMC's 5nm process. Those chips also support DDR5, PCIe 5.0, and CXL interfaces.
|
Computer specifications ... love 'em ... HBM3 mostly for data centers - potentially server rack(s)
https://hothardware.com/news/hbm3-specification-819gbs-bandwidth
by Paul Lilly — Friday, January 28, 2022
HBM3 Specification Leaves HBM2E In The Dust With 819GB/s Of Bandwidth
At long last, there's an official and finalized specification for the next generation of High Bandwidth Memory. JEDEC Solid State Technology Association, the industry group that develops open standards for microelectronics, announced the publication of the HBM3 specification, which nearly doubles the bandwidth of HBM2E. It also increases the maximum package capacity.
So what are we looking at here? The HBM3 specification calls for a doubling (compared to HBM2) of the per-pin data rate to 6.4 gigabits per second (Gb/s), which works out to 819 gigabytes per second (GB/s) per device.
To put those figures into perspective, HBM2 has a per-pin transfer rate of 3.2Gb/s equating to 410GB/s of bandwidth, while HBM2E pushes a little further with a 3.65Gb/s data rate and 460GB/s of bandwidth. So HBM3 effectively doubles the bandwidth of HBM2, and offers around 78 percent more bandwidth than HBM2E.
What paved the way for the massive increase is a doubling of the independent memory channels from eight (HBM2) to 16 (HBM3). And with two pseudo channels per channel, HBM3 virtually supports 32 channels.
Once again, the use of die stacking pushes capacities further. HBM3 supports 4-high, 8-high, and 12-high TSV stacks, and could expand to a 16-high TSV stack design in the future. Accordingly, it supports a wide range of densities from 8Gb to 32Gb per memory layer. That translates to device densities ranging from 4GB (4-high, 8Gb) all the way to 64GB (16-high, 32Gb). Initially, however, JEDEC says first-gen devices will be based on a 16Gb memory layer design.
"With its enhanced performance and reliability attributes, HBM3 will enable new applications requiring tremendous memory bandwidth and capacity," said Barry Wagner, Director of Technical Marketing at NVIDIA and JEDEC HBM Subcommittee Chair.
There's little-to-no chance you'll see HBM3 in NVIDIA's Ada Lovelace or AMD's RDNA 3 solutions for consumers. AMD dabbled with HBM on some of its prior graphics cards for gaming, but GDDR solutions are cheaper to implement. Instead, HBM3 will find its way to the data center.
SK Hynix pretty much said as much last year when it flexed 24GB of HBM3 at 819GB/s, which can transmit 163 Full HD 1080p movies at 5GB each in just one second. SK Hynix at the time indicated the primary destination will be high-performance computer (HPC) clients and machine learning (ML) platforms.
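The 819 GB/s headline follows directly from the per-pin rate, since each HBM stack exposes a 1024-bit interface; a quick check (Python):

```python
# Per-device HBM bandwidth = per-pin data rate * 1024-bit bus / 8 bits-per-byte.
def hbm_GBps(gbps_per_pin: float, bus_width_bits: int = 1024) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(hbm_GBps(3.2))   # 409.6 GB/sec -> HBM2
print(hbm_GBps(3.65))  # 467.2 GB/sec -> HBM2E (commonly quoted as ~460)
print(hbm_GBps(6.4))   # 819.2 GB/sec -> HBM3
```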
|
|
|
GPU |
NVidia Titan X ~3 teraflops double precision (PCIe gen 3).
Image core:
FISHscope: NVidia RTX 2080 Ti GPU 11GB (cellSens C.I. deconvolution).
Leica SP8 (HP Z640): NVidia M6000 GPU (Maxwell architecture = old, GPU that came with PC, no upgrades due to chassis & motherboard constraints) GPU used by SVI.nl Huygens GPU deconvolution ("HyVolution2" combination of Leica HyD detectors, Huygens).
GM PC: NVIDIA Quadro 2000 (no need for more modern card since not doing GPU deconvolution on this PC).
|
NVidia RTX 3090 Ti ~20 Teraflop double precision, PCIe gen 4 (Ampere architecture).
near future: RTX 40x0 series ("Ada Lovelace" architecture, 1 generation past Ampere in the 30x0 models), 40 (maybe 50) Teraflops by end of 2022, PCIe gen5.
In 2021, NVidia software drivers -- on select modern PCIe4 motherboards -- enabled "resizable BAR" (see PC_Tips_2021) for bigger data transfers between GPU RAM and main system RAM. Essentially, one can now transfer 'unlimited' data (limited only by GPU RAM, since there is usually more system RAM), instead of the historic 256 MB memory aperture. This means, for example, a 1 GB Z-series can be moved onto (and results later moved off of) the GPU in one transfer. Same with upcoming PCIe5 -- which is 2x the throughput of PCIe4.
Note: I emphasize NVidia over AMD for GPUs because most (maybe all) deconvolution software has been developed to run on NVidia cards, notably SVI.nl Huygens, Microvolution (www.microvolution.com), AutoQuant (part of Media Cybernetics), and various microscope vendors' packages (often featuring "A.I."/deep learning, which plays well on NVidia 20x0 and 30x0 GPUs), such as Leica THUNDER/LIGHTNING, Olympus cellSens constrained iterative ("C.I.") deconvolution, Nikon Elements, and Zeiss ZEN.
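To put the resizable-BAR point above in numbers, a small sketch of moving that 1 GB Z-series across PCIe generations (nominal x16 per-direction bandwidths; real transfers add driver and latency overhead):

```python
# Time to move a 1 GB Z-series to the GPU at nominal PCIe x16 bandwidths.
PCIE_X16_GBPS = {"gen3": 16, "gen4": 32, "gen5": 64}  # GB/sec, per direction
z_series_gb = 1.0

for gen, bw in PCIE_X16_GBPS.items():
    print(f"PCIe {gen}: {z_series_gb / bw * 1000:5.1f} ms for a 1 GB stack")

# Without resizable BAR, the old 256 MB aperture forced that 1 GB stack to be
# staged in ~4 chunks, each with its own setup overhead; with it, one transfer.
```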
|
|
RAM |
64 GB was "a lot" in 2012 (and pricey).
Image core: several PCs with 64 GB RAM; one has 256 GB RAM.
|
64 GB of RAM (in a PCIe gen4 system) was ~$600 in mid-2021 when GM purchased a new PC for home (PowerSpec G509 from MicroCenter), so ~$10/GB.
2022: PCIe5 platforms will play well with new, fast RAM (DDR5; on GPUs, GDDR6/GDDR6X).
Some RAM prices
https://www.tomshardware.com/news/ddr5-availability-improving-prices-dropping
32GB dual-channel DDR5-4800, DDR5-5200, DDR5-5600, and DDR5-6000 kits.
                                       Dec 2021    Late Jan 2022
Crucial DDR5-4800 CL40                   $1000        $450 (URL below)
Kingston Fury Beast DDR5-5200 CL40       $1000        $428
G.Skill Trident Z DDR5-5600 CL36         $1000        $510
G.Skill Trident Z DDR5-6000 CL36         $3660        $810
Comparison                            June 2021    Late Jan 2022
DDR4-3600 (GM's home PC)                  $170        $140
https://www.amazon.com/gp/product/B0884TNHNC purchased as 2x32GB; price above is per 32GB.
20220127Thur amazon URL and current price ($450 for 32GB stick, as in table above):
https://www.amazon.com/Crucial-4800MHz-Desktop-Memory-CT32G48C40U5/dp/B09HW97JVF
Crucial RAM 32GB DDR5 4800MHz CL40 Desktop Memory CT32G48C40U5
$450
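In per-GB terms (a quick sketch from the table above, per 32 GB stick):

```python
# Price per GB for the 32 GB sticks quoted above.
prices_usd_per_32gb = {
    "Crucial DDR5-4800, Dec 2021": 1000,
    "Crucial DDR5-4800, Jan 2022": 450,
    "DDR4-3600, Jan 2022": 140,
}
for kit, usd in prices_usd_per_32gb.items():
    print(f"{kit}: ${usd / 32:.2f}/GB")
# DDR5 fell from ~$31/GB to ~$14/GB in two months; DDR4 is ~$4.40/GB.
```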
I note that Amazon prices are lower for two 16GB RAM sticks than for one 32GB stick. The advantage of the latter is more capacity per motherboard slot (check the motherboard's data sheet and manual to make sure the PC supports both the higher-capacity RAM chips AND the higher total RAM). You may also need a higher "tier" of Windows - e.g. Windows 10 Enterprise or Windows Server 2016 - at a higher price (though that may be invisible in academia or at big companies). These need to be x64 - no one should be using 32-bit x86 in 2022!!! See
https://docs.microsoft.com/en-us/windows/win32/memory/memory-limits-for-windows-releases
Limits on memory and address space vary by platform, operating system, and by whether the IMAGE_FILE_LARGE_ADDRESS_AWARE value of the LOADED_IMAGE structure and 4-gigabyte tuning (4GT) are in use. IMAGE_FILE_LARGE_ADDRESS_AWARE is set or cleared by using the /LARGEADDRESSAWARE linker option.
6 TB    Windows 10 Enterprise (x64)
6 TB    Windows 10 Pro for Workstations (x64) ... 4 GB on x86
2 TB    Windows 10 Pro (x64) ... 4 GB on x86
128 GB  Windows 10 Home (x64) ... 4 GB on x86
24 TB   Windows Server 2016 Datacenter or 2016 Standard
4 TB    Windows Server 2012 Datacenter or 2012 Standard
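A Windows-only sketch for checking installed RAM against those x64 limits (uses the kernel32 GetPhysicallyInstalledSystemMemory call, which reports kilobytes):

```python
# Windows-only: read installed RAM and compare it to the x64 limits above.
import ctypes

kb = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetPhysicallyInstalledSystemMemory(ctypes.byref(kb))
installed_gb = kb.value / (1024 * 1024)

LIMITS_GB = {
    "Windows 10 Home": 128,
    "Windows 10 Pro": 2 * 1024,
    "Windows 10 Pro for Workstations": 6 * 1024,
}
for edition, limit in LIMITS_GB.items():
    verdict = "within" if installed_gb <= limit else "EXCEEDS"
    print(f"{installed_gb:.0f} GB {verdict} the {edition} limit ({limit} GB)")
```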
HBM3 interface = fast (1/2022)
https://www.tomshardware.com/news/hbm3-spec-reaches-819-gbps-of-bandwidth-and-64gb-of-capacity
HBM3 Spec Reaches 819 GBps of Bandwidth and 64GB of Capacity (High Bandwidth Memory )
By Mark Tyson - January 28, 2022
Huge uplift over HBM2E's max 3.65 Gbps, 460 GBps.
The evolution of High Bandwidth Memory (HBM) continues with the JEDEC Solid State Technology Association finalizing and publishing the HBM3 specification today, with the standout features including up to 819 GBps of bandwidth coupled with up to 16-Hi stacks and 64GB of capacity.
We have seen telltale indicators of what to expect in prior months, with news regarding JEDEC member company developments in HBM3. In November, we reported on an SK hynix 24GB HBM3 demo, and Rambus announced its HBM3-ready combined PHY and memory controller with some detailed specs back in August, for example. However, it is good to see the JEDEC specification now agreed so the industry comprising HBM makers and users can move forward. In addition, the full spec is now downloadable from JEDEC.
If you have followed the previous HBM3 coverage, you will know that the central promise of HBM3 is to double the per-pin data rate compared to HBM2. Indeed, the new spec specifies that HBM3 will provide a standard 6.4 Gbps data rate for 819 GBps of bandwidth. The key architectural change behind this speed-up is the doubling of the number of independent memory channels to 16. Moreover, HBM3 supports two pseudo channels per channel for virtual support of 32 channels.
Another welcome advance with the move to HBM3 is in potential capacity. With HBM die stacking using TSV technology, you gain capacity with denser packages plus higher stacks. HBM3 will enable from 4GB (8Gb 4-high) to 64GB (32Gb 16-high) capacities. However, JEDEC states that 16-high TSV stacks are for a future extension, so HBM3 makers will be limited to 12-high stacks maximum within the current spec (i.e., max 48GB capacity).
Meanwhile, the first HBM3 devices are expected to be based on 16Gb memory layers, says JEDEC. The range of densities and stack options in the HBM3 spec gives device makers a wide range of configurations.
|
|
|
Ethernet |
1 Gbe = 125 MB/sec
1 GB/hour acquisition (confocal) or (say) 4 GB/hour (FISHscope) implies that a network running at 10 Gbe = 1.25 GB/sec (practically ~1 GB/sec, so 3600 GB/hour) is "quite good".
Image core: main PCs have 10 Gbe Ethernet (1.25 GB/sec) (~$100 per PCIe3 card), connected by CAT-7 cables to a Netgear 8-port 10 Gbe switch (purchase price in 2019 ~$640, so $80/port). GM thanks Kevin Murphy, PhD, for leading the cabling between rooms (Kevin was a postdoc at JHU at the time), and John Gibas (image core manager prior to GM) for contributions.
|
10 Gbe = 1.25 GB/sec (in our core 2+ years), ~$100 per PCIe gen3 card and ~$80 per port on an 8-port Netgear switch. So ~$180 per PC.
future: 40, 56 or 100 Gbe = 5, 7 or 12.5 GB/sec (see drive speed above for comparison with "each end" of the data transfer). Prices will change over time; I'll estimate (1/2022) a PCIe gen4 card at $800 and $800 per port on a fast switch, so ~$1600 per PC. For a lattice light sheet (LLS) microscope, probably worth doing "as soon as financially practical"; for most confocal microscopes, this can be deferred "to 2023 and maybe beyond".
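Transfer-time arithmetic for these tiers (line rate / 8, ignoring protocol overhead; 1000 GB stands in for roughly one microscope-year at 1 GB/hour):

```python
# How long to move a dataset at each Ethernet tier (line rate / 8).
dataset_gb = 1000  # ~ one microscope-year at 1 GB/hour

for gbe in (1, 10, 40, 100):
    gb_per_sec = gbe / 8
    hours = dataset_gb / gb_per_sec / 3600
    print(f"{gbe:>3} Gbe: {hours:5.2f} hours for {dataset_gb} GB")
# 1 Gbe: ~2.2 hours; 10 Gbe: ~13 minutes; 100 Gbe: ~80 seconds.
```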
|
|
PC monitors |
Image core:
Acquisition PCs: 27" or 32" LG monitors, some HD 4K.
GM PC: dual Dell 27" monitors on a nice dual-monitor stand.
|
In practice, one 32" HD 4K monitor works very nicely for acquisition PCs - sometimes dual 32" monitors can work.
2022: new monitors are brighter with wider viewing angles (Qdot), and faster refresh rates are possible (gaming). Still, HD 4K is usually most practical.
|
|
USB |
USB 3.0, 3.1, 3.2 |
USB4 (mid-2022) |
|
operating systems |
PC: Windows 10 Pro
File server: Windows Server 2012
|
PC: Windows 10 Pro (RAM limit) or Windows 10 Enterprise (can access more RAM)
File server: Windows Server 2019(?)
|
|
non-volatile memory |
Optane (gen4?) vs CXL (2022, PCIe5) |
20220216W TomsHardware story starts with the potential demise of Intel Optane storage,
https://www.tomshardware.com/news/intel-optane-future-looks-gloomier-than-ever?utm_source=notification
then pivots to "CXL" which sounds pretty cool (and fast and less expensive):
CXL In, Optane Out?
There is another looming problem for Optane. The first products featuring the industry-standard Compute Express Link (CXL) coherent low-latency interconnect protocol are due to be available this year.
CXL enables sharing system resources over a PCIe 5.0 physical interface (PHY) stack without using complex memory management, thus assuring low latency. CXL is supported by AMD's upcoming EPYC 'Genoa,' Intel's forthcoming 'Sapphire Rapids,' and various Arm-powered server platforms.
The CXL 1.1 specification supports three protocols: the mandatory CXL.io (for storage devices), CXL.cache for cache coherency (for accelerators), and CXL.memory for memory coherency (for memory expansion devices). From a performance point of view, a CXL-compliant device will have access to 64 GB/s of bandwidth in each direction (128 GB/s in total) when plugged into a PCIe 5.0 x16 slot.
PCIe 5.0 speeds are more than enough for upcoming 3D NAND-based SSDs to leave Intel's current Optane DC drives behind in terms of sequential read/write speeds, so unless Intel releases PCIe 5.0 Optane DC SSDs, its existing Optane DC SSDs will lose their appeal when next-gen server platforms emerge. We know that more PCIe Gen 4-based Optane DC drives are incoming, but we haven't seen any signs of PCIe 5.0 Optane DC SSDs.
Meanwhile, CXL.memory-supporting memory expansion devices with their low latency provide serious competition to proprietary Intel Optane Persistent Memory modules in terms of performance. Of course, PMem modules plugged into memory slots could still offer higher bandwidth than PCIe/CXL-based memory accelerators (due to the higher number of channels and higher data transfer rates). But these non-volatile DIMMs are still not as fast as standard memory modules, so they will find themselves between a rock (faster DRAMs) and a hard place (cheaper memory on PCIe/CXL expansion devices).
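That 64 GB/s-per-direction figure falls out of the PCIe 5.0 link math (a sketch; PCIe 4.0/5.0 use 128b/130b encoding at 16/32 GT/s per lane):

```python
# Per-direction bandwidth of a PCIe link: GT/s per lane * lanes * encoding / 8.
def pcie_GBps(gt_per_sec: float, lanes: int = 16) -> float:
    return gt_per_sec * lanes * (128 / 130) / 8

print(pcie_GBps(16))  # PCIe 4.0 x16: ~31.5 GB/sec per direction
print(pcie_GBps(32))  # PCIe 5.0 x16: ~63.0 GB/sec per direction (the "64 GB/s")
```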
|
|
Run Windows and Win software on a Mac |
|
20220328M - note that this newatlas "deals" article is really an ad (one hint: no author byline).
newatlas deals (ad) For under $80, run Windows seamlessly on your Mac with Parallels PC
This deal offers the latest version of Parallels PC. Optimized for Windows 10 and 11, and macOS Monterey, Version 17 is faster and smoother than ever. Winner of PC Magazine’s 2021 Editor’s Choice award for virtualization software, a 1-year subscription can be yours for only $79.99, a 20% discount off the suggested retail price.
https://newatlas.com/deals/parallels-software-mac-pc
For under $80, run Windows seamlessly on your Mac with Parallels PC
March 24, 2022
Who’s funnier—Siri or Cortana? It may be a contest you’ll never be able to judge as your allegiance lies with Mac. But by installing Parallels PC on that Mac of yours, you can run most Windows apps, setting the stage for a voice assistant joke-off.
If you’re a tried and true Mac user, from your iPhone to your MacBook, from your iPad to your Apple Watch, we know that switching operating systems is not likely in your cards. But it does seem that there are some applications that just run better, or are only available using Windows. Popular programs such as Microsoft Office, Visual Studio, Quickbooks, Internet Explorer, and so many more can now easily be run on your MacBook, MacBook Pro, iMac, iMac Pro, Mac mini, or Mac Pro thanks to this emulation software.
Trusted by more than 7 million users and praised by experts, Parallels PC is easy to install and allows you to effortlessly run more than 200,000 Windows apps on your Mac without rebooting or slowing down your computer. You can run both operating systems side-by-side and even share files and folders, copy and paste images and text, and drag and drop files and content between the two of them.
This deal offers the latest version of Parallels PC. Optimized for Windows 10 and 11, and macOS Monterey, Version 17 is faster and smoother than ever. Winner of PC Magazine’s 2021 Editor’s Choice award for virtualization software, a 1-year subscription can be yours for only $79.99, a 20% discount off the suggested retail price.
And once you have it up and running, the Siri/Cortana competition can begin. Here’s a question that provides a bit of an ironic twist. Ask them both what the best computer is. Siri: "All truly intelligent assistants prefer Macintosh." Cortana: "Anything that runs Windows." Looks like you’ll be a winner with both!
Prices subject to change.
|