  • d00ery@lemmy.world · 9 months ago

    Pi4 with 2TB SSD running:

    • Portainer
    • Calibre
    • qBittorrent
    • Kodi

    HDMI cable straight to the living room Smart TV (which is not connected to the internet).

    Other devices access media (TV shows, movies, books, comics, audiobooks) over DLNA using VLC, except for e-readers, which just use the Calibre web UI.

    The main router is flashed with OpenWrt and runs a DNS ad blocker. Ethernet runs to a second router upstairs and to the main PC, and there's a small WiFi repeater with Ethernet in the basement. It’s not a huge house, but it does have old thick walls which are terrible for WiFi propagation.
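
    The ad blocking is a DNS sinkhole on the router: domains on a blocklist are answered locally (e.g. with 0.0.0.0), so clients never reach the ad servers. Purely as an illustration (a generic sketch, not the config described above), this is roughly the transformation such a blocker performs when turning a hosts-format blocklist into rules for dnsmasq, OpenWrt’s stock resolver; the file names are made up:

        // Hypothetical sketch: turn a hosts-format blocklist (lines like "0.0.0.0 ads.example.com")
        // into dnsmasq "address=/domain/0.0.0.0" rules, which is roughly what DNS-sinkhole
        // ad blockers generate for the router's resolver. File names are made up.
        package main

        import (
            "bufio"
            "fmt"
            "log"
            "os"
            "strings"
        )

        func main() {
            in, err := os.Open("blocklist-hosts.txt") // illustrative input path
            if err != nil {
                log.Fatal(err)
            }
            defer in.Close()

            out, err := os.Create("adblock.conf") // illustrative dnsmasq drop-in
            if err != nil {
                log.Fatal(err)
            }
            defer out.Close()

            scanner := bufio.NewScanner(in)
            for scanner.Scan() {
                line := strings.TrimSpace(scanner.Text())
                if line == "" || strings.HasPrefix(line, "#") {
                    continue // skip blanks and comments
                }
                fields := strings.Fields(line)
                if len(fields) < 2 || fields[1] == "localhost" {
                    continue // expect "<ip> <domain>"
                }
                // Blocked domains resolve to 0.0.0.0, so clients get a dead address.
                fmt.Fprintf(out, "address=/%s/0.0.0.0\n", fields[1])
            }
            if err := scanner.Err(); err != nil {
                log.Fatal(err)
            }
        }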

  • WaltzingKea@lemmy.nz · 9 months ago

    Bad. I have a Raspberry Pi 4 hanging from an HDMI cable going up to a projector, and a 2TB SSD hanging from the Raspberry Pi. I host Nextcloud and Transmission on the RPi and use Kodi for viewing media through the projector.

  • rambos@lemm.ee · 9 months ago

    1) DIY PC (running everything)

    • MSI Z270-A PRO
    • Intel G3930
    • 16GB DDR4
    • ATX PSU 550W
    • 250GB SSD for OS
    • 500GB SSD for data
    • 12TB HDD for backup + media

    2) Raspberry Pi 4, 4GB (running a second Pi-hole instance)

  • iggy@lemmy.world · 9 months ago

    Internet:

    • 1G fiber

    Router:

    • N100 with dual 2.5G NICs

    Lab:

    • 3x N100 mini PCs as k8s control plane + Ceph mon/mds/mgr
    • 4x Aoostar R7 “NAS” systems (5700U / 32GB RAM / 20TB rust / 2TB SATA SSD / 4TB NVMe) as Ceph OSDs / k8s workers

    Network:

    • A hodgepodge of switches I shouldn’t trust nearly as much as I do:
    • 3x 8-port 2.5G switches (1 with PoE for the APs)
    • 1x 24-port 1G switch
    • 2x Omada APs

    Software:

    • All the standard stuff for media archival purposes
    • Ceph for storage (using some manual tiering in cephfs)
    • K8s for container orchestration (deployed via k0sctl)
    • A handful of cloud-hypervisor VMs
    • Most of the lab is managed by some tooling I’ve written in Go (a generic example of that sort of helper is sketched below)
    • Alpine Linux for everything

    All under 120 W of power usage.
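
    Purely as a hypothetical sketch of the kind of small helper such Go lab tooling tends to include (not the actual tooling mentioned above), this checks in parallel that each node’s SSH port is reachable; the hostnames are invented:

        // Hypothetical sketch: concurrently check that each lab node's SSH port answers.
        // Hostnames are invented for illustration.
        package main

        import (
            "fmt"
            "net"
            "sync"
            "time"
        )

        func main() {
            nodes := []string{"cp1.lab", "cp2.lab", "cp3.lab", "nas1.lab", "nas2.lab"}
            const port = "22"

            var wg sync.WaitGroup
            for _, host := range nodes {
                wg.Add(1)
                go func(host string) {
                    defer wg.Done()
                    conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 2*time.Second)
                    if err != nil {
                        fmt.Printf("%-10s DOWN (%v)\n", host, err)
                        return
                    }
                    conn.Close()
                    fmt.Printf("%-10s up\n", host)
                }(host)
            }
            wg.Wait()
        }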

    • slazer2au@lemmy.world · 9 months ago

      How are you finding the Aoostar R7? I’ve had my eye on it for a while, but there’s not much talk about it outside of YouTube reviews.

  • RegalPotoo@lemmy.world · 9 months ago

    • An HP ML350p w/ 2x HT 8-core Xeons (I forget the model number) and 256GB DDR3, running Ubuntu and K3s as the primary application host
    • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
    • A random mini PC I got for free from work, running VyOS as my border router
    • A Brocade ICX 6610-48p as the core switch

    The hardware is total overkill. Software-wise, everything runs in containers, deployed into Kubernetes using Helmfile, Jenkins, and Gitea.

  • Hemi03@lemmy.blahaj.zone · 9 months ago

    • Pico PSU
    • ASRock N100M
    • Eaton 3S Mini UPS
    • 250GB SATA SSD for the OS
    • 4x 4TB SATA SSDs
    • PCIe SATA splitter

    All in a small PC case.

    The server is running YunoHost.

  • Presi300@lemmy.world · 9 months ago

    I only use the highest grade of hardware:

    Case: found in the trash

    Motherboard: some random Asus AM3 board I got as a hand-me-down.

    CPU: AMD FX-8320E (8 core)

    RAM: 16GB

    Storage: 5x 2TB HDDs + a 128GB SSD, with a 32GB flash drive as the boot device

    That’s it… My entire “homelab”

  • dan@upvote.au · 9 months ago

    At home - Networking

    • 10Gbps internet via Sonic, a local ISP in the San Francisco Bay Area. It’s only $40/month.
    • TP-Link Omada ER8411 10Gbps router
    • MikroTik CRS312-4C+8XG-RM 12-port 10Gbps switch
    • 2 x TP-Link Omada EAP670 access points with 2.5Gbps PoE injectors
    • TP-Link TL-SG1218MPE 16-port 1Gbps PoE switch for security cameras (3 x Dahua outdoor cams and 2 x Amcrest indoor cams). All cameras are on a separate VLAN that has no internet access.
    • SLZB-06 PoE Zigbee coordinator for home automation - all my light switches are Inovelli Blue Zigbee smart switches, plus I have a bunch of smart plugs. Aqara temperature sensors, buttons, door/window sensors, etc.

    Home server:

    • Intel Core i5-13500
    • Asus PRO WS W680M-ACE SE mATX motherboard
    • 64GB server DDR5 ECC RAM
    • 2 x 2TB Solidigm P44 Pro NVMe SSDs in ZFS mirror
    • 2 x 20TB Seagate Exos X20 in ZFS mirror for data storage
    • 14TB WD Purple Pro for security camera footage. Alerts are SFTP’d to an offsite server for secondary storage (a generic sketch of that kind of upload follows this list).
    • Running Unraid, a bunch of Docker containers, a Windows Server 2022 VM for Blue Iris, and an LXC container for a Borgbackup server.
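
    A generic sketch of what pushing an alert clip offsite over SFTP can look like (this is not the actual pipeline described above), using the widely used github.com/pkg/sftp and golang.org/x/crypto/ssh packages; the host, user, key path, and file paths are all invented:

        // Hypothetical sketch: push a saved alert clip to an offsite box over SFTP.
        // Host, user, key path, and file paths are invented; pin a real host key in practice.
        package main

        import (
            "io"
            "log"
            "os"

            "github.com/pkg/sftp"
            "golang.org/x/crypto/ssh"
        )

        func main() {
            key, err := os.ReadFile("/root/.ssh/id_ed25519") // illustrative key path
            if err != nil {
                log.Fatal(err)
            }
            signer, err := ssh.ParsePrivateKey(key)
            if err != nil {
                log.Fatal(err)
            }

            conn, err := ssh.Dial("tcp", "offsite.example.net:22", &ssh.ClientConfig{
                User:            "backup",
                Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
                HostKeyCallback: ssh.InsecureIgnoreHostKey(), // placeholder; verify the key for real use
            })
            if err != nil {
                log.Fatal(err)
            }
            defer conn.Close()

            client, err := sftp.NewClient(conn)
            if err != nil {
                log.Fatal(err)
            }
            defer client.Close()

            src, err := os.Open("/mnt/cctv/alerts/clip.mp4") // illustrative local clip
            if err != nil {
                log.Fatal(err)
            }
            defer src.Close()

            dst, err := client.Create("/backups/cctv/clip.mp4") // illustrative remote path
            if err != nil {
                log.Fatal(err)
            }
            defer dst.Close()

            if _, err := io.Copy(dst, src); err != nil {
                log.Fatal(err)
            }
        }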

    For things that need 100% reliability like email, web hosting, DNS hosting, etc., I have a few VPSes “in the cloud”. The one for my email is an AMD EPYC with 16GB RAM, 100GB of NVMe space, and a 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch (but with 40Gbps instead of 10Gbps) in Los Angeles.

    I’ve got a bunch of other VPSes, mostly for https://dnstools.ws/, which is an open-source project I run. It lets you perform DNS lookups, pings, traceroutes, etc. from nearly 30 locations around the world. Many of those are sponsored, which means the company provides them for cheap or free in exchange for a backlink.
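
    dnstools.ws has its own codebase; purely as a minimal sketch of the core operation behind a “DNS lookup from location X” feature, Go’s net.Resolver can be pointed at an arbitrary DNS server. The resolver address and domain below are just examples:

        // Minimal sketch (not dnstools.ws code): resolve a name against a specific DNS server,
        // the basic operation behind a "lookup from location X" tool.
        package main

        import (
            "context"
            "fmt"
            "log"
            "net"
            "time"
        )

        func main() {
            r := &net.Resolver{
                PreferGo: true, // use Go's built-in resolver so the Dial override is honoured
                Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                    d := net.Dialer{Timeout: 3 * time.Second}
                    // Ignore the default address and ask a chosen server instead (example: 1.1.1.1).
                    return d.DialContext(ctx, network, "1.1.1.1:53")
                },
            }

            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            defer cancel()

            addrs, err := r.LookupHost(ctx, "example.com")
            if err != nil {
                log.Fatal(err)
            }
            for _, a := range addrs {
                fmt.Println(a)
            }
        }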

    This Lemmy server is on another GreenCloudVPS system - their ninth birthday special which has 9GB RAM and 99GB NVMe disk space for $99 every three years ($33/year).

  • thejevans@lemmy.ml · 9 months ago

    https://pixelfed.social/p/thejevans/664709222708438068

    EDIT:

    Server:

    • AMD 5900x
    • 64GB RAM
    • 2x10TB HDD
    • RTX 3080
    • LSI-9208i HBA
    • 2x SFP+ NIC
    • 2TB NVMe boot drive

    Proxmox hypervisor:

    • TrueNAS VM (HBA PCIe passthrough)
    • HomeAssistant VM
    • Debian 12 LXC as SSH entrypoint and Ansible controller
    • Debian 12 VM with Ansible-controlled Docker containers
    • Debian 12 VM (GPU PCIe passthrough) with Jellyfin and other services that use the GPU
    • Debian 12 VM for other Docker stuff not yet controlled by Ansible and not needing the GPU

    Router: N6005 fanless mini PC, 2.5Gbit NICs, pfSense

    Switch: MikroTik CRS, 8-port 2.5Gbit, 2-port SFP+

      • thejevans@lemmy.ml · 9 months ago

        I have a Kasm setup with Blender and CAD tools. I use the GPU for transcoding video in Immich and Jellyfin, and for facial recognition in Immich. I also have a CUDA dev environment on there as a playground.

        I upgraded my gaming PC to an AMD 7900 XTX, so I can finally be rid of Nvidia and their gaming and Wayland driver issues on Linux.

  • darganon@lemmy.world · 9 months ago

    I have a Lenovo TS140 in the laundry room: i3-4330, 16GB RAM, 2TB of SSD, running Arch.

    In Docker I am running:

    Plex, WireGuard, qBittorrent, Pi-hole, my Discord bot, nginx, and TeslaMate.

    Works great. I’m probably going to swap my gaming rig in (5800X + 3080 12GB) with more RAM to host some AI stuff and the same services.

  • Decronym@lemmy.decronym.xyz (bot) · 7 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    AP: WiFi Access Point
    CGNAT: Carrier-Grade NAT
    DNS: Domain Name Service/System
    Git: Popular version control system, primarily for code
    HA: Home Assistant automation software; also High Availability
    HTTP: Hypertext Transfer Protocol, the Web
    HTTPS: HTTP over SSL
    IP: Internet Protocol
    LTS: Long Term Support software version
    LVM: (Linux) Logical Volume Manager for filesystem mapping
    LXC: Linux Containers
    NAS: Network-Attached Storage
    NAT: Network Address Translation
    NUC: Next Unit of Computing brand of Intel small computers
    NVMe: Non-Volatile Memory Express interface for mass storage
    PCIe: Peripheral Component Interconnect Express
    PSU: Power Supply Unit
    PiHole: Network-wide ad-blocker (DNS sinkhole)
    Plex: Brand of media server package
    PoE: Power over Ethernet
    RAID: Redundant Array of Independent Disks for mass storage
    RPi: Raspberry Pi brand of SBC
    SAN: Storage Area Network
    SATA: Serial AT Attachment interface for mass storage
    SBC: Single-Board Computer
    SSD: Solid State Drive mass storage
    SSH: Secure Shell for remote terminal access
    SSL: Secure Sockets Layer, for transparent encryption
    VPN: Virtual Private Network
    ZFS: Solaris/Linux filesystem focusing on data integrity
    Zigbee: Wireless mesh network for low-power devices
    k8s: Kubernetes container management package
    nginx: Popular HTTP server

    30 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

    [Thread #525 for this sub, first seen 18th Feb 2024, 06:05]

  • synae[he/him]@lemmy.sdf.org · 9 months ago

    A 13-year-old former gaming computer with 30TB of storage in RAID 6 that runs the *arrs, SABnzbd, and Plex. Everything is managed by k3s except Plex.

    Also, a 3-node DigitalOcean k8s cluster which runs services that don’t need direct access to the 30TB of storage, such as Grocy, Jackett, Nextcloud, a Solid server, and soon a Lemmy instance :)

  • Avid Amoeba@lemmy.ca · 9 months ago

    Main site:

    • 5950X on a GA-AB350-Gaming 3
    • 64GB
    • 1TB NVMe mirrored
    • 24TB RAIDz1, using external USB 3 disks
    • Ubuntu LTS
    • 700Mbps uplink
    • OpenWrt on Pi 4 router
    • Home Assistant Yellow

    Off site:

    • ThinkCentre 715q
    • 2400GE
    • 8GB
    • 256GB NVMe
    • 24TB RAIDz1, using external USB 3 disks
    • Ubuntu LTS
    • 30Mbps uplink
    • OpenWrt on Pi 4 router

    Syncthing replicates data between the two. ZFS auto snapshots prevent accidental or malicious data loss at each site. Various services run on both machines: Plex, Wiki.js, OpenProject, etc. Most are run in Docker, managed via systemd. The main machine is also used as a workstation and for gaming. The storage arrays are ghetto special: USB 3 external disks, some WD Elements, some Seagate in enclosures. I even used to have a 1T, a 3T and a 4T disk in an LVM volume pretending to be an 8T disk in one of the ZFS pools. The next time I have to expand the storage I’ll use second-hand disks. The 5950X isn’t boosting as high as it should be able to on a chipset with PB2, but I got all those cores on a B350 board. 😆 Config management is done with SaltStack.
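
    Since the snapshots are what protect against accidental or malicious loss, a freshness check makes a useful companion. A hypothetical sketch (not the tooling described above), assuming the standard zfs CLI is available: it warns when a dataset’s newest snapshot is more than a day old, and the dataset name is made up:

        // Hypothetical sketch: warn when a ZFS dataset's newest snapshot is older than a day.
        // Assumes the standard `zfs` CLI; the dataset name is invented.
        package main

        import (
            "fmt"
            "log"
            "os/exec"
            "strconv"
            "strings"
            "time"
        )

        func main() {
            const dataset = "tank/data" // invented dataset name

            // -H: no header, -p: parsable (epoch) values; "creation" is the snapshot creation time.
            out, err := exec.Command("zfs", "list", "-H", "-p", "-t", "snapshot",
                "-o", "name,creation", "-r", dataset).Output()
            if err != nil {
                log.Fatalf("zfs list failed: %v", err)
            }

            var newest time.Time
            for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
                fields := strings.Fields(line)
                if len(fields) != 2 {
                    continue
                }
                epoch, err := strconv.ParseInt(fields[1], 10, 64)
                if err != nil {
                    continue
                }
                if t := time.Unix(epoch, 0); t.After(newest) {
                    newest = t
                }
            }

            if newest.IsZero() {
                log.Fatalf("no snapshots found for %s", dataset)
            }
            age := time.Since(newest)
            if age > 24*time.Hour {
                fmt.Printf("WARNING: newest snapshot of %s is %s old\n", dataset, age.Round(time.Minute))
            } else {
                fmt.Printf("ok: newest snapshot of %s is %s old\n", dataset, age.Round(time.Minute))
            }
        }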

    • thejevans@lemmy.ml · 9 months ago

      I have a similar setup. I just recently switched to the ASRock Phantom X570 for $100. It’s a fantastic board at that price.

        • thejevans@lemmy.ml · 9 months ago

          I’ll have to double check, but I came from a B450 board. It definitely allowed me to run my RAM at a higher XMP profile (4x 3200MHz), and it has way better IOMMU groups. Each PCIe device gets its own group, so they can all be passed to different VMs.