1 of 7

Scanning at the University of Oslo, Norway
Espen Grøndahl
Head of IT-security

UNIVERSITETETS SENTER FOR INFORMASJONSTEKNOLOGI

2 of 7

The university network

  • 129.240.0.0/16 - "the managed network"
    • Strict rules, only approved operating systems, central logging, managed devices, managed patching, centrally managed user accounts
    • Some "lab-networks", separate print network, separate IoT network
    • But still a university... not everybody follows the rules
  • No firewall – but increasingly strict ACLs
    • ~1,000 exposed ports according to Shodan
    • > 20,000 active hosts
  • Several other IPv4 and IPv6 networks

3 of 7

The scanner

  • We tried OpenVAS / Greenbone many years ago – too much noise, and our devices were too fragile
  • Nmap scans into a database/ELK for almost 20 years (see the sketch below)
  • Thorsten and Asbjørn scanned a (larger) part of our network and found many things Nessus didn't find – that caught our attention
  • We tried a VM with Kali first – but it was too slow for such a big network
    • 1st try: 4 cores and 4 GB memory – Redis died with an OOM error
    • 2nd try: 8 cores and 8 GB memory – still trouble during massive scans
  • Current: 8 cores and 10 GB memory – working, but the web UI is still slow
  • Installed with Ansible, nginx frontend
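
The nmap-to-ELK pipeline is roughly this shape – a sketch, not our actual script – emitting one JSON document per open port for Filebeat/Logstash to ship into ELK (the target prefix is just an example):

package main

import (
        "encoding/json"
        "encoding/xml"
        "log"
        "os"
        "os/exec"
        "time"
)

// Minimal subset of nmap's XML report format.
type nmapRun struct {
        Hosts []struct {
                Addresses []struct {
                        Addr string `xml:"addr,attr"`
                } `xml:"address"`
                Ports []struct {
                        Proto string `xml:"protocol,attr"`
                        ID    int    `xml:"portid,attr"`
                        State struct {
                                State string `xml:"state,attr"`
                        } `xml:"state"`
                } `xml:"ports>port"`
        } `xml:"host"`
}

func main() {
        // Scan one segment and capture the XML report on stdout.
        out, err := exec.Command("nmap", "-oX", "-", "129.240.0.0/24").Output()
        if err != nil {
                log.Fatal(err)
        }
        var run nmapRun
        if err := xml.Unmarshal(out, &run); err != nil {
                log.Fatal(err)
        }
        // One JSON line per open port, ready for ELK ingestion.
        enc := json.NewEncoder(os.Stdout)
        for _, h := range run.Hosts {
                if len(h.Addresses) == 0 {
                        continue
                }
                for _, p := range h.Ports {
                        if p.State.State != "open" {
                                continue
                        }
                        enc.Encode(map[string]any{
                                "@timestamp": time.Now().UTC(),
                                "ip":         h.Addresses[0].Addr,
                                "port":       p.ID,
                                "proto":      p.Proto,
                        })
                }
        }
}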

4 of 7

The scan

  • Divided the network into /20 segments – ran 3-4 simultaneous scans (see the sketch below)
    • Default "Full and fast" scan config – 4 concurrent tests per host, 20 hosts scanned in parallel
    • Scans took from a few hours to several days, depending on the number of hosts and services
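
A minimal sketch of the segmentation and the concurrency cap – scan() is a placeholder, not the actual GVM task creation:

package main

import (
        "encoding/binary"
        "fmt"
        "net/netip"
        "sync"
)

// split divides an IPv4 prefix into child prefixes of newLen bits,
// e.g. 129.240.0.0/16 into sixteen /20 scan segments.
func split(parent netip.Prefix, newLen int) []netip.Prefix {
        a4 := parent.Addr().As4()
        start := binary.BigEndian.Uint32(a4[:])
        step := uint32(1) << (32 - newLen)
        count := 1 << (newLen - parent.Bits())

        out := make([]netip.Prefix, 0, count)
        for i := 0; i < count; i++ {
                var b [4]byte
                binary.BigEndian.PutUint32(b[:], start+uint32(i)*step)
                out = append(out, netip.PrefixFrom(netip.AddrFrom4(b), newLen))
        }
        return out
}

// scan is a placeholder – in practice this would create and start a GVM task.
func scan(p netip.Prefix) { fmt.Println("scanning", p) }

func main() {
        segments := split(netip.MustParsePrefix("129.240.0.0/16"), 20)

        sem := make(chan struct{}, 3) // at most 3 simultaneous scans
        var wg sync.WaitGroup
        for _, seg := range segments {
                wg.Add(1)
                go func(p netip.Prefix) {
                        defer wg.Done()
                        sem <- struct{}{}
                        defer func() { <-sem }()
                        scan(p)
                }(seg)
        }
        wg.Wait()
}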

  • The results:
    • > 15,000 hosts
    • > 1,000,000 events in the database
    • > 100 reports (multiple scans of every network segment)
    • > 600 vulnerabilities

5 of 7

The results and handling

  • Very few false positives with the default setup
  • Focused on real RCEs and information leaks
  • 200 results handled
  • Some results handled as classes of errors – e.g. many hosts with an old jQuery
  • Added a 30-day override for cases reported to the user or responsible unit

6 of 7

Customization and automation

  • The web UI is slow with 1,000,000 results
  • We (I) made a custom Go service – gvm-helper
    • Direct access to the database (see the handler sketch below)
    • Extracts results as JSON, filtered by network/mask, CVSS, etc.
    • Applies an override to a single IP (all results for a given IP)
    • Automation is in beta
      • Automatic scan of the IaaS network
      • If CVSS > 9, automatic ticket to the host owner

package main

import (
        "log"
        "net/http"

        "github.com/julienschmidt/httprouter"
)

func main() {
        // Set up handlers (the handler functions live elsewhere in gvm-helper)
        mux := httprouter.New()

        // Overrides and result extraction, keyed by IP, NVT, or report UUID
        mux.POST("/override/ip/:ip/:limit/:days/:drange", overridehandler)
        mux.GET("/results/ip/:ip/*rest", resultsbyiphandler)
        mux.GET("/results/self/:limit/:days", resultsselfhandler)
        mux.GET("/results/nvt/:ip/:mask/:nvt/:days", resultsbynvthandler)
        mux.GET("/results/uuid/:uuid", resultsbyuuidhandler)

        // Service metadata and a full dump of the extracted results
        mux.GET("/info", infohandler)
        mux.GET("/dump", dumphandler)

        // Listen address is illustrative – the service sits behind nginx
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
}

7 of 7

The road ahead

  • More automation – send results above a CVSS level to the owner (see the sketch below)
  • Remove useless signatures
  • Add owners to hosts (comment field) – lay the groundwork for more automation
  • On-demand scans initiated by the host owner
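
As a sketch of the CVSS-threshold automation – the owner lookup (eventually from the comment field) and the mail-based ticketing are hypothetical placeholders:

package main

import (
        "fmt"
        "log"
        "net/smtp"
)

type result struct {
        Host     string
        Severity float64
        NVT      string
}

// notifyOwners opens one ticket (here: one mail) per result above the threshold.
func notifyOwners(results []result, owners map[string]string, threshold float64) {
        for _, res := range results {
                if res.Severity <= threshold {
                        continue
                }
                owner, ok := owners[res.Host]
                if !ok {
                        log.Printf("no owner registered for %s, skipping", res.Host)
                        continue
                }
                msg := fmt.Sprintf("To: %s\r\nSubject: Vulnerability on %s (CVSS %.1f)\r\n\r\n%s\r\n",
                        owner, res.Host, res.Severity, res.NVT)
                // Local relay, no auth – adjust to the real ticketing setup.
                if err := smtp.SendMail("localhost:25", nil, "gvm-helper@example.org",
                        []string{owner}, []byte(msg)); err != nil {
                        log.Printf("mail to %s failed: %v", owner, err)
                }
        }
}

func main() {
        owners := map[string]string{"129.240.1.10": "owner@example.org"} // hypothetical mapping
        findings := []result{{Host: "129.240.1.10", Severity: 9.8, NVT: "Example RCE"}}
        notifyOwners(findings, owners, 9.0)
}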