Monday 13 November 2017

RRDtool Moving Average


I work with a large number of time series. These time series are basically network measurements arriving every 10 minutes; some of them are periodic (e.g. bandwidth), while some others are not (e.g. the amount of routing traffic). I would like a simple algorithm for doing online outlier detection. Basically, I want to keep all the historical data for each time series in memory (or on disk), and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve this? I am currently using a moving average to remove some noise, but then what next? Simple things like standard deviation or MAD against the whole dataset do not work well (I cannot assume the time series are stationary), and I would like something more accurate, ideally a black box like: double outlier_detection(double vector, double value), where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample. asked Aug 2 '10 at 20:37

Yes, I have assumed the frequency is known and specified. There are methods to estimate the frequency automatically, but that would complicate the function considerably. If you need to estimate the frequency, you could try asking a separate question about it - and I will probably provide an answer. But it needs more space than I have available in a comment. – Rob Hyndman Aug 3 '10 at 23:40

A good solution will have several ingredients, including: Use a resistant, moving-window smooth to remove non-stationarity. Re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed. Given the nature of your data, it is likely that their square roots or logarithms will give symmetric residuals. Apply control-chart methods, or at least control-chart thinking, to the residuals. As far as the last point goes, control-chart thinking shows that conventional thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly, because they trigger too many false out-of-control signals. People usually use 3 SD in control-chart work, so 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point. I have more or less outlined the nature of Rob Hyndman's solution while adding two main points: the potential need to re-express the data, and the wisdom of being more conservative in signalling an outlier. I am not sure that loess is good for an online detector, though, because it does not work well at the endpoints. You could instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If the outliers do not come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have done the analysis to determine a good re-expression of the data, you are unlikely to need to change it. Your online detector therefore only needs to reference the most recent values (the latest window), because it will not use the earlier data at all. If you have very long time series, you could go further and analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure. answered Aug 26 '10 at 18:02

John, 1.5 IQR is Tukey's original recommendation for the longest whiskers on a boxplot, and 3 IQR is his recommendation for marking points as "far outliers" (a riff on a popular '60s phrase).
This is built into many boxplot algorithms. The recommendation is analyzed theoretically in Hoaglin, Mosteller & Tukey, Understanding Robust and Exploratory Data Analysis. – whuber Oct 9 '12 at 21:38

This matches the time series data I have been trying to analyze: a window average and a windowed standard deviation. ((x - avg) / sd) > 3 seems to pick out the points I want to flag as outliers. Or call those warnings; I flag anything higher than 10 sd as an extreme error outlier. The problem I run into is what an ideal window length is: I am playing with anything between 4-8 data points. – NeoZenith Jun 29 '16 at 8:00

Neo, your best bet may be to experiment with a subset of your data and to confirm your conclusions with tests on the rest. You could also conduct a more formal cross-validation (but care is needed with time series data because of the interdependence of all the values). – whuber Jun 29 '16 at 12:10

(This answer responded to a duplicate, now closed, question about detecting outstanding events, which presented some data in graphical form.) Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by any outliers, and then point to any individual values that do not fit within that characterization. Because this is a time series, it adds the complication of needing to (re)detect outliers on an ongoing basis. If this is to be done as the series unfolds, we may only use older data for the detection, not future data. To protect against the many repeated tests, we would want a method with a very low false positive rate. These considerations suggest running a simple, robust moving-window outlier test over the data. There are many possibilities, but one simple, easily understood and easily implemented one is based on a running MAD: the median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more greater than the median. There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying, and how far back in time should one look? Let us leave these as parameters for experimentation. Here is an R implementation applied to data x = (1, 2, ..., n) (with n = 1150 to emulate the data) with corresponding values y: Applied to a dataset like the red curve illustrated in the question, it produces this result: the data are shown in red, the 30-day window of median + 5*MAD thresholds in gray, and the outliers - which are simply the data values above the gray curve - in black. (The threshold can only be computed starting at the end of the initial window; for all data within this initial window, the first threshold is used, which is why the gray curve is flat between x = 0 and x = 30.) The effects of changing the parameters are (a) increasing the window length tends to smooth out the gray curve, and (b) increasing the threshold raises the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best separate the outlying peaks from the rest of the data. Apply these parameter values to checking the rest of the data. If a plot shows the method is deteriorating over time, that means the nature of the data is changing and the parameters may need adjusting.
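As a rough illustration of the approach just described (a trailing window, a running median, and an outlier threshold of the median plus several MADs), here is a minimal Python sketch; it is a re-creation, not the R listing referred to above, and the function name, defaults and use of NumPy are my own choices:

    import numpy as np

    def mad_outliers(y, window=30, n_mads=5.0):
        # Flag y[i] as outlying when it exceeds median + n_mads * MAD of the
        # trailing `window` values; once the initial window has filled, only
        # past data are used, mirroring the description above.
        y = np.asarray(y, dtype=float)
        thresholds = np.empty(len(y))
        for i in range(len(y)):
            past = y[:window] if i < window else y[i - window:i]
            med = np.median(past)
            mad = np.median(np.abs(past - med))
            thresholds[i] = med + n_mads * mad
        return y > thresholds, thresholds

    # Example: flags, thr = mad_outliers(samples, window=30, n_mads=5)

The window length and the MAD multiplier are the two tuning parameters discussed above; experimenting with an initial segment of the data is a reasonable way to choose them.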
Note how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they do not even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data. If anyone would like to experiment (or compare some other solution to the one offered here), here is the code I used to produce data like those shown in the question.

I am guessing a sophisticated time-series model will not work for you because of the time it takes to detect outliers using this method. Therefore, here is a workaround: first, establish a baseline "normal" traffic pattern for a year, based on manual analysis of historical data that accounts for time of day, weekday vs. weekend, month of the year, etc. Use this baseline along with some simple mechanism (for example the moving average suggested by Carlos) to detect outliers. You may also want to review the statistical process control literature for some ideas.

Yes, this is exactly what I am doing: until now I have been splitting the signal into periods manually, so that for each of them I can define a confidence interval within which the signal is supposed to be stationary, and therefore I can use standard methods such as the standard deviation. The real problem is that I cannot decide the expected pattern for all the signals I have to analyze, and that is why I am looking for something more intelligent. – gianluca Aug 2 '10 at 21:37

Here is one idea: Step 1: implement and estimate a generic time series model on a one-time basis from historical data. This can be done offline. Step 2: use the resulting model to detect outliers. Step 3: recalibrate the time series model (this can be done offline) with some frequency (perhaps every month), so that the Step 2 outlier detection does not drift too far away from the current traffic patterns. Would that work for your context? – user28 Aug 2 '10 at 22:24

Yes, this might work. I was thinking about a similar approach (recomputing the baseline every week, which can be CPU-intensive if you have hundreds of univariate time series to analyze). BTW the really hard question is: what is the best black-box-style algorithm for modeling a completely generic signal, considering noise, trend estimation and seasonality? AFAIK, every approach in the literature requires a really hard "parameter tuning" phase, and the only automatic method I have found is an ARIMA model by Hyndman (robjhyndman.com/software/forecast). Am I missing something? – gianluca Aug 2 '10 at 22:38

Again, this works quite well if the signal is supposed to have a seasonality like that, but if I use a completely different kind of time series (e.g. the average TCP round-trip time over time), this method will not work (since it would be better to handle that one with a simple global mean and standard deviation using a sliding window of historical data). – gianluca Aug 2 '10 at 22:02

Unless you are willing to implement a general time series model (which brings in its downsides in terms of latency etc.), I am pessimistic that you will find a general implementation which at the same time is simple enough to work for all kinds of time series.
– user28 Aug 2 '10 at 22:06

One more comment: I know a good answer might be "so you could estimate the periodicity of the signal and decide which algorithm to use according to it", but I did not find a really good solution to this other problem (I played a bit with spectral analysis using the DFT and time-domain analysis using the autocorrelation function, but my time series contain a lot of noise and such methods give mixed results most of the time). – gianluca Aug 2 '10 at 22:06

A comment on your last comment: that is why I am looking for a more generic approach, but I need a kind of "black box" because I cannot make any assumption about the analyzed signal, and therefore I cannot create the "best parameter set for the learning algorithm". – gianluca Aug 2 '10 at 22:09

Since it is time series data, a simple exponential filter (en.wikipedia.org/wiki/Exponential_smoothing) will smooth the data. It is a very good filter since you do not need to accumulate old data points. Compare every newly smoothed data value with its unsmoothed value. Once the deviation exceeds a certain predefined threshold (depending on what you believe an outlier in your data is), your outlier can easily be detected. answered Apr 30 '15 at 8:50

You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average. answered Aug 2 '10 at 20:48

Thanks for your answer, but what if the signal exhibits a high seasonality (i.e. many network measurements are characterized by a daily and a weekly pattern at the same time, for example night vs. day or weekend vs. working days)? An approach based on the standard deviation will not work in that case. – gianluca Aug 2 '10 at 20:57

For example, if I get a new sample every 10 minutes, and I am doing outlier detection of the network bandwidth usage of a company, basically at 6 pm this measure will drop (this is an expected and totally normal pattern), and a standard deviation computed over a sliding window will fail (because it will trigger an alert for sure). At the same time, if the measure drops at 4 pm (deviating from the usual baseline), this is a real outlier. – gianluca Aug 2 '10 at 20:58

What I do is group the measurements by hour of day and day of week and compare standard deviations of that. It still does not correct for things like holidays and summer/winter seasonality, but it is correct most of the time. The downside is that you really need to collect a year or so of data before the stddev starts making sense.

Spectral analysis detects periodicity in stationary time series. The frequency-domain approach based on spectral density estimation is the approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then a series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be, and then you can set a threshold at some level above that average to designate the irregular cases. answered Sep 3 '12 at 14:59

I suggest the scheme below, which should be implementable in a day or so: collect as many samples as you can hold in memory; remove obvious outliers using the standard deviation for each attribute; calculate and store the correlation matrix and also the mean of each attribute; calculate and store the Mahalanobis distances of all your samples. Calculating "outlierness": for the single sample of which you want to know its outlierness, retrieve the means, covariance matrix and Mahalanobis distances from training; calculate the Mahalanobis distance d for the sample; return the percentile in which d falls (using the Mahalanobis distances from training). That will be your outlier score: 100 is an extreme outlier. PS. When calculating the Mahalanobis distance, use the correlation matrix, not the covariance matrix. This is more robust if the sample measurements vary in unit and number.
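A minimal Python sketch of that scoring scheme might look like the following; the helper names are mine, standardizing the columns stands in for "use the correlation matrix" as the PS suggests, and the pseudo-inverse is used only to keep the sketch robust to degenerate matrices:

    import numpy as np

    def fit_mahalanobis(train):
        # train: array of shape (n_samples, n_features) from the training phase.
        train = np.asarray(train, dtype=float)
        mean = train.mean(axis=0)
        std = train.std(axis=0)
        std[std == 0] = 1.0
        z = (train - mean) / std
        corr_inv = np.linalg.pinv(np.corrcoef(z, rowvar=False))
        dists = np.sqrt(np.einsum('ij,jk,ik->i', z, corr_inv, z))
        return mean, std, corr_inv, np.sort(dists)

    def outlier_score(sample, mean, std, corr_inv, train_dists):
        z = (np.asarray(sample, dtype=float) - mean) / std
        d = np.sqrt(z @ corr_inv @ z)
        # Percentile of d among the training distances: 100 means extreme outlier.
        return 100.0 * np.searchsorted(train_dists, d) / len(train_dists)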
Graphite [1] performs two pretty simple tasks: storing numbers that change over time and graphing them. A lot of software has been written over the years to do those same tasks. What makes Graphite unique is that it provides this functionality as a network service that is both easy to use and highly scalable. The protocol for feeding data into Graphite is simple enough that you could learn to do it by hand in a few minutes (not that you would actually want to, but it is a decent litmus test for simplicity). Rendering graphs and retrieving data points are as easy as fetching a URL. This makes it very natural to integrate Graphite with other software and enables users to build powerful applications on top of it. One of the most common uses of Graphite is building web-based dashboards for monitoring and analysis. Graphite was born in a high-volume e-commerce environment and its design reflects this: scalability and real-time access to data are key goals. The components that allow Graphite to achieve these goals are a specialized database library and its storage format, a caching mechanism for optimizing I/O operations, and a simple yet effective method of clustering Graphite servers. Rather than simply describing how Graphite works today, I will explain how Graphite was initially implemented (quite naively), what problems I ran into, and how I devised solutions to them.

7.1. The Database Library: Storing Time-Series Data

Graphite is written entirely in Python and consists of three major components: a database library named whisper, a back-end daemon called carbon, and a front-end webapp that renders graphs and provides a basic user interface. While whisper was written specifically for Graphite, it can also be used independently. It is very similar in design to the round-robin database used by RRDtool and only stores time-series numeric data. Usually we think of databases as server processes that client applications talk to over sockets. However, whisper, much like RRDtool, is a database library used by applications to manipulate and retrieve data stored in specially formatted files. The most basic whisper operations are create to make a new whisper file, update to write new data points to a file, and fetch to retrieve data points.

Figure 7.1: Basic Anatomy of a whisper File

As shown in Figure 7.1, whisper files consist of a header section containing various metadata, followed by one or more archive sections. Each archive is a sequence of consecutive data points which are (timestamp, value) pairs. When an update or fetch operation is performed, whisper determines the offset in the file where the data should be written to or read from, based on the timestamp and the archive configuration.
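To make the create/update/fetch operations concrete, here is a small sketch using the whisper module on its own, independent of carbon; the file path and the 10-minute/30-day archive layout are illustrative assumptions:

    import time
    import whisper  # the database library that ships with Graphite

    path = '/tmp/example.wsp'

    # One archive: 600-second (10-minute) resolution, 4320 points (about 30 days).
    whisper.create(path, [(600, 4320)])

    # Write a single data point (value, timestamp).
    whisper.update(path, 42.0, int(time.time()))

    # Read back the last day of data points.
    (start, end, step), values = whisper.fetch(path, int(time.time()) - 86400)
    print(start, end, step, values[:5])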
7.2. The Back End: A Simple Storage Service

Graphite's back end is a daemon process called carbon-cache, usually simply referred to as carbon. It is built on Twisted, a highly scalable event-driven I/O framework for Python. Twisted enables carbon to efficiently talk to a large number of clients and handle a large amount of traffic with low overhead. Figure 7.2 shows the data flow among carbon, whisper and the webapp: client applications collect data and send it to the Graphite back end, carbon, which stores the data using whisper. This data can then be used by the Graphite webapp to generate graphs.

Figure 7.2: Data Flow

The primary function of carbon is to store data points for metrics provided by clients. In Graphite terminology, a metric is any measurable quantity that can vary over time (like the CPU utilization of a server or the number of sales of a product). A data point is simply a (timestamp, value) pair corresponding to the measured value of a particular metric at a point in time. Metrics are uniquely identified by their name, and the name of each metric, as well as its data points, are provided by client applications. A common type of client application is a monitoring agent that collects system or application statistics and sends its collected values to carbon for easy storage and visualization. Metrics in Graphite have simple hierarchical names, similar to filesystem paths except that a dot is used to delimit the hierarchy rather than a slash or backslash. carbon will respect any legal name and create a whisper file for each metric to store its data points. The whisper files are stored within carbon's data directory in a filesystem hierarchy that mirrors the dot-delimited hierarchy in each metric's name, so that (for example) servers.www01.cpuUsage maps to …/servers/www01/cpuUsage.wsp.

When a client application wishes to send data points to Graphite, it must establish a TCP connection to carbon, usually on port 2003 [2]. The client does all the talking; carbon does not send anything over the connection. The client sends data points in a simple plain-text format, and the connection may be left open and reused as needed. The format is one line of text per data point, where each line contains the dotted metric name, the value, and a Unix epoch timestamp, separated by spaces. For example, a client might send: At a high level, all carbon does is listen for data in this format and try to store it on disk as quickly as possible using whisper. Later on we will discuss the details of some tricks used to ensure scalability and get the best performance we can out of a typical hard drive.
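A sketch of such a client in Python follows; the host name and the metric are illustrative assumptions, and port 2003 is the conventional plain-text port mentioned above:

    import socket
    import time

    # Assumed host for illustration; carbon's plain-text listener conventionally
    # runs on TCP port 2003.
    CARBON_HOST, CARBON_PORT = 'graphite.example.com', 2003

    def send_datapoints(points):
        # points: iterable of (metric_name, value, unix_timestamp)
        lines = ['%s %s %d' % (name, value, ts) for name, value, ts in points]
        payload = '\n'.join(lines) + '\n'
        with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
            sock.sendall(payload.encode('ascii'))

    send_datapoints([('servers.www01.cpuUsage', 42.0, int(time.time()))])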
7.3. The Front End: Graphs On-Demand

The Graphite webapp allows users to request custom graphs with a simple URL-based API. Graphing parameters are specified in the query string of an HTTP GET request, and a PNG image is returned as the response. For example, the URL requests a 500×300 graph for the metric servers.www01.cpuUsage and the past 24 hours of data. Really only the target parameter is required; all the others are optional and use default values if omitted. Graphite supports a wide variety of display options as well as data manipulation functions that follow a simple functional syntax. For example, we could graph a 10-point moving average of the metric in our previous example like this: Functions can be nested, allowing for complex expressions and calculations. Here is another example that gives the running total of sales for the day, using per-product metrics of sales per minute: the sumSeries function computes a time series that is the sum of each metric matching the pattern products.*.salesPerMinute. Then integral computes a running total rather than a per-minute count.

From here it is not too hard to imagine how one might build a web UI for viewing and manipulating graphs. Graphite comes with its own Composer UI, shown in Figure 7.3, which does this by using JavaScript to modify the graph's URL parameters as the user clicks through menus of the available features.

Figure 7.3: Graphite's Composer Interface

7.4. Dashboards

Since its inception, Graphite has been used as a tool for creating web-based dashboards. The URL API makes this a natural use case. Making a dashboard is as simple as making an HTML page full of tags like this: However, not everyone likes crafting URLs by hand, so Graphite's Composer UI provides a point-and-click method to create a graph from which you can simply copy and paste the URL. When coupled with another tool that allows rapid creation of web pages (such as a wiki), this becomes easy enough that non-technical users can build their own dashboards quite simply.

7.5. An Obvious Bottleneck

Once my users started building dashboards, Graphite quickly began to have performance issues. I investigated the web server logs to see what requests were bogging it down. It was pretty obvious that the problem was the sheer number of graphing requests: the webapp was CPU-bound, rendering graphs constantly. I noticed that there were a lot of identical requests, and the dashboards were to blame. Imagine you have a dashboard with 10 graphs in it and the page refreshes once a minute. Each time a user opens the dashboard in their browser, Graphite has to handle 10 more requests per minute. This quickly becomes expensive. A simple solution is to render each graph only once and then serve a copy of it to each user. The Django web framework (which Graphite is built on) provides an excellent caching mechanism that can use various back ends such as memcached. Memcached [3] is essentially a hash table provided as a network service. Client applications can get and set key-value pairs just like an ordinary hash table. The main benefit of using memcached is that the result of an expensive request (like rendering a graph) can be stored very quickly and retrieved later to handle subsequent requests. To avoid returning the same stale graphs forever, memcached can be configured to expire the cached graphs after a short period. Even if this is only a few seconds, the burden it takes off Graphite is tremendous because duplicate requests are so common. Another common case that creates lots of rendering requests is when a user is tweaking the display options and applying functions in the Composer UI. Each time the user changes something, Graphite must redraw the graph. The same data is involved in each request, so it makes sense to put the underlying data in memcache as well. This keeps the UI responsive to the user because the step of retrieving the data is skipped.
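Tying Sections 7.3-7.5 together, a dashboard page or script can simply fetch rendered PNGs from the URL API (and cache them on its side if it wishes). In this sketch the host name is an assumption, while target, width, height and from are the request parameters discussed above:

    from urllib.parse import urlencode
    from urllib.request import urlopen

    BASE = 'http://graphite.example.com/render/'   # assumed Graphite webapp URL

    params = urlencode({
        'target': 'movingAverage(servers.www01.cpuUsage,10)',  # functions can be nested
        'width': 500,
        'height': 300,
        'from': '-24hours',
    })

    with urlopen(BASE + '?' + params) as resp:
        with open('cpuUsage.png', 'wb') as out:
            out.write(resp.read())   # the response body is the rendered PNG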
7.6. Optimizing I/O

Imagine that you have 60,000 metrics that you send to your Graphite server, and each of these metrics has one data point per minute. Remember that each metric has its own whisper file on the filesystem. This means carbon must do one write operation to 60,000 different files each minute. As long as carbon can write to one file each millisecond, it should be able to keep up. This is not too far-fetched, but say you have 600,000 metrics updating each minute, or your metrics are updating every second, or perhaps you simply cannot afford fast enough storage. Whatever the case, assume the rate of incoming data points exceeds the rate of write operations that your storage can keep up with. How should this situation be handled?

Most hard drives these days have slow seek times [4], that is, a long delay between doing I/O operations at two different locations compared to writing a contiguous sequence of data. This means the more contiguous writing we do, the more throughput we get. But if we have thousands of files that need to be written to frequently, and each write is very small (one whisper data point is only 12 bytes), then our disks are definitely going to spend most of their time seeking. Working under the assumption that the rate of write operations has a relatively low ceiling, the only way to increase data point throughput beyond that rate is to write multiple data points in a single write operation. This is feasible because whisper arranges consecutive data points contiguously on disk. So I added an update_many function to whisper, which takes a list of data points for a single metric and compacts contiguous data points into a single write operation. Even though this made each write larger, the difference in time it takes to write ten data points (120 bytes) versus one data point (12 bytes) is negligible. It takes quite a few more data points before the size of each write starts to noticeably affect the latency.

Next I implemented a buffering mechanism in carbon. Each incoming data point gets mapped to a queue based on its metric name and is then appended to that queue. Another thread repeatedly iterates through all of the queues and, for each one, pulls all of the data points out and writes them to the appropriate whisper file with update_many. Going back to our example, if we have 600,000 metrics updating every minute and our storage can only keep up with one write per millisecond, then the queues will end up holding about 10 data points each on average. The only resource this costs us is memory, which is relatively plentiful since each data point is only a few bytes. This strategy dynamically buffers as many data points as necessary to sustain a rate of incoming data points that may exceed the rate of I/O operations your storage can keep up with. A nice advantage of this approach is that it adds a degree of resiliency to handle temporary I/O slowdowns. If the system needs to do other I/O work outside of Graphite, the rate of write operations is likely to decrease, in which case carbon's queues will simply grow. The larger the queues, the larger the writes. Since the overall throughput of data points is equal to the rate of write operations times the average size of each write, carbon is able to keep up as long as there is enough memory for the queues. carbon's queueing mechanism is depicted in Figure 7.4.

Figure 7.4: Carbon's Queueing Mechanism
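A stripped-down sketch of that queueing idea follows. It is not carbon's actual code: the metric-to-path mapping and the one-second pause are assumptions, and whisper's update_many batch call is the one described above:

    import threading
    import time
    from collections import defaultdict, deque

    import whisper

    queues = defaultdict(deque)   # metric name -> pending (timestamp, value) points
    lock = threading.Lock()

    def enqueue(metric, timestamp, value):
        # Called for every incoming data point.
        with lock:
            queues[metric].append((timestamp, value))

    def writer_loop(path_for_metric):
        # A single writer thread drains each queue and commits all of a metric's
        # pending points with one update_many() call, i.e. one write operation.
        # path_for_metric maps a dotted metric name to its .wsp file path.
        while True:
            with lock:
                pending = {m: list(q) for m, q in queues.items() if q}
                for q in queues.values():
                    q.clear()
            for metric, points in pending.items():
                whisper.update_many(path_for_metric(metric), points)
            time.sleep(1)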
7.7. Keeping It Real-Time

Buffering data points was a nice way to optimize carbon's I/O, but it did not take long for my users to notice a rather troubling side effect. Revisiting our example again, we have 600,000 metrics that update every minute and we assume our storage can only keep up with 60,000 write operations per minute. This means we will have approximately 10 minutes' worth of data sitting in carbon's queues at any given time. To a user this means the graphs they request from the Graphite webapp will be missing the most recent 10 minutes of data: not good! Fortunately the solution is quite straightforward. I simply added a socket listener to carbon that provides a query interface for accessing the buffered data points, and then modified the Graphite webapp to use this interface each time it needs to retrieve data. The webapp then combines the data points it retrieves from carbon with the data points it retrieves from disk, and voilà, the graphs are real-time. Granted, in our example the data points are updated to the minute and thus not exactly real-time, but the fact that each data point is instantly accessible in a graph once it is received by carbon is real-time.

7.8. Kernels, Caches, and Catastrophic Failures

As is probably obvious by now, a key characteristic of system performance that Graphite's own performance depends on is I/O latency. So far we have assumed our system has consistently low I/O latency averaging around 1 millisecond per write, but this is a big assumption that requires a little deeper analysis. Most hard drives simply are not that fast; even with dozens of disks in a RAID array there is very likely to be more than 1 millisecond of latency for random access. Yet if you were to test how quickly even an old laptop could write a whole kilobyte to disk, you would find that the write system call returns in far less than 1 millisecond. Why? Whenever software has inconsistent or unexpected performance characteristics, usually either buffering or caching is to blame. In this case, we are dealing with both. The write system call does not technically write your data to disk; it simply puts it in a buffer which the kernel then writes to disk later. This is why the write call usually returns so quickly. Even after the buffer has been written to disk, it often remains cached for subsequent reads. Both of these behaviors, buffering and caching, require memory, of course. Kernel developers, being the smart folks that they are, decided it would be a good idea to use whatever user-space memory is currently free instead of allocating memory outright. This turns out to be a tremendously useful performance booster, and it also explains why, no matter how much memory you add to a system, it will usually end up having almost zero free memory after doing a modest amount of I/O. If your user-space applications are not using that memory, then your kernel probably is. The downside of this approach is that this free memory can be taken away from the kernel the moment a user-space application decides it needs to allocate more memory for itself. The kernel has no choice but to relinquish it, losing whatever buffers may have been there.

So what does all of this mean for Graphite? We just highlighted carbon's reliance on consistently low I/O latency, and we also know that the write system call only returns quickly because the data is merely being copied into a buffer. What happens when there is not enough memory for the kernel to continue buffering writes? The writes become synchronous and thus terribly slow. This causes a dramatic drop in the rate of carbon's write operations, which causes carbon's queues to grow, which eats up even more memory, starving the kernel even further. In the end, this kind of situation usually results in carbon running out of memory or being killed by an angry sysadmin. To avoid this kind of catastrophe, I added several features to carbon, including configurable limits on how many data points can be queued and rate limits on how quickly various whisper operations can be performed.
These features can protect carbon from spiraling out of control and instead impose less harsh effects, like dropping some data points or refusing to accept more data points. However, the proper values for those settings are system-specific and require a fair amount of testing to tune. They are useful, but they do not fundamentally solve the problem. For that, we need more hardware.

7.9. Clustering

Making multiple Graphite servers appear to be a single system from a user perspective is not terribly difficult, at least for a naïve implementation. The webapp's user interaction primarily consists of two operations: finding metrics and fetching data points (usually in the form of a graph). The find and fetch operations of the webapp are tucked away in a library that abstracts their implementation from the rest of the codebase, and they are also exposed through HTTP request handlers for easy remote calls. The find operation searches the local filesystem of whisper data for things matching a user-specified pattern, just as a filesystem glob like *.txt matches files with that extension. Being a tree structure, the result returned by find is a collection of Node objects, each deriving from either the Branch or Leaf subclass of Node. Directories correspond to branch nodes and whisper files correspond to leaf nodes. This layer of abstraction makes it easy to support different types of underlying storage, including RRD files [5] and gzipped whisper files. The Leaf interface defines a fetch method whose implementation depends on the type of leaf node. In the case of whisper files it is simply a thin wrapper around the whisper library's own fetch function. When clustering support was added, the find function was extended to be able to make remote find calls via HTTP to the other Graphite servers specified in the webapp's configuration. The node data contained in the results of these HTTP calls gets wrapped as RemoteNode objects which conform to the usual Node, Branch, and Leaf interfaces. This makes the clustering transparent to the rest of the webapp's codebase. The fetch method for a remote leaf node is implemented as another HTTP call to retrieve the data points from the node's Graphite server. All of these calls are made between the webapps the same way a client would call them, except with one additional parameter specifying that the operation should only be performed locally and not redistributed throughout the cluster. When the webapp is asked to render a graph, it performs the find operation to locate the requested metrics and calls fetch on each to retrieve their data points. This works whether the data is on the local server, on remote servers, or both. If a server goes down, the remote calls time out fairly quickly and the server is marked as out of service for a short period, during which no further calls will be made to it. From a user's standpoint, whatever data was on the lost server will be missing from their graphs, unless that data is duplicated on another server in the cluster.
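The remote-find idea can be pictured roughly as follows. This is only a conceptual sketch: the /metrics/find endpoint, its JSON output and the host names are assumptions rather than the webapp's actual internals, and failed servers are simply skipped here instead of being marked out of service as described above:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    CLUSTER_SERVERS = ['graphite-a.example.com', 'graphite-b.example.com']  # assumed

    def remote_find(pattern, timeout=2.0):
        # Ask each other server in the cluster for nodes matching the pattern
        # and merge the results; a server that does not answer quickly is skipped.
        nodes = []
        for host in CLUSTER_SERVERS:
            url = 'http://%s/metrics/find/?%s' % (host, urlencode({'query': pattern}))
            try:
                with urlopen(url, timeout=timeout) as resp:
                    nodes.extend(json.loads(resp.read().decode('utf-8')))
            except OSError:
                continue
        return nodes

    # e.g. remote_find('servers.*.cpuUsage')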
7.9.1. A Brief Analysis of Clustering Efficiency

The most expensive part of a graphing request is rendering the graph. Each rendering is performed by a single server, so adding more servers does effectively increase capacity for rendering graphs. However, the fact that many requests end up distributing find calls to every other server in the cluster means that our clustering scheme is sharing much of the front-end load rather than dispersing it. What we have achieved at this point, though, is an effective way to distribute back-end load, as each carbon instance operates independently. This is a good first step, since most of the time the back end becomes a bottleneck far before the front end does, but clearly the front end will not scale horizontally with this approach. In order to make the front end scale more effectively, the number of remote find calls made by the webapp must be reduced. Again, the easiest solution is caching. Just as memcached is already used to cache data points and rendered graphs, it can also be used to cache the results of find requests. Since the location of metrics is much less likely to change frequently, these should typically be cached for longer. The trade-off of setting the cache timeout for find results too long, however, is that new metrics added to the hierarchy may not appear as quickly to the user.

7.9.2. Distributing Metrics in a Cluster

The Graphite webapp is rather homogeneous throughout a cluster, in that it performs exactly the same job on each server. carbon's role, however, can vary from server to server depending on what data you choose to send to each instance. Often there are many different clients sending data to carbon, so it would be quite annoying to couple each client's configuration with your Graphite cluster's layout. Application metrics may go to one carbon server, while business metrics may get sent to several carbon servers for redundancy. To simplify the management of scenarios like this, Graphite comes with an additional tool called carbon-relay. Its job is quite simple: it receives metric data from clients exactly like the standard carbon daemon (which is actually named carbon-cache), but instead of storing the data, it applies a set of rules to the metric names to determine which carbon-cache servers to relay the data to. Each rule consists of a regular expression and a list of destination servers. For each data point received, the rules are evaluated in order and the first rule whose regular expression matches the metric name is used. This way, all the clients need to do is send their data to the carbon-relay and it will end up on the right servers. In a sense carbon-relay provides replication functionality, though it would more accurately be called input duplication, since it does not deal with synchronization issues. If a server goes down temporarily, it will be missing the data points for the time period in which it was down but otherwise function normally. There are administrative scripts that leave control of the re-synchronization process in the hands of the system administrator.
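The first-match-wins rule evaluation that carbon-relay performs is easy to picture; the rules below are invented examples and the destination format is only illustrative:

    import re

    # (regular expression, destination servers) pairs, evaluated in order.
    RELAY_RULES = [
        (re.compile(r'^business\.'), ['carbon-a:2004', 'carbon-b:2004']),  # duplicated for redundancy
        (re.compile(r'.*'),          ['carbon-a:2004']),                   # default rule
    ]

    def destinations_for(metric_name):
        # The first rule whose regex matches the metric name wins.
        for pattern, servers in RELAY_RULES:
            if pattern.search(metric_name):
                return servers
        return []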
7.10. Design Reflections

My experience working on Graphite has reaffirmed my belief that scalability has very little to do with low-level performance and is instead a product of overall design. I have run into many bottlenecks along the way, but each time I looked for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C, and my response is always that I have yet to come across a true need for the performance that another language could offer. In [Knu74], Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways, all optimization [6] is in some sense premature. One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually designed in the traditional sense. By and large, Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However, it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will. The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies. Problem solving is driven by both the empirical data we have at hand and our own knowledge and intuition. I have found that doubting your own wisdom sufficiently can force you to look at your empirical data more thoroughly. For example, when I first wrote whisper I was convinced that it would have to be rewritten in C for speed and that my Python implementation would only serve as a prototype. If I had not been under a time crunch, I may very well have skipped the Python implementation entirely. It turns out, however, that I/O is a bottleneck so much earlier than CPU that the lesser efficiency of Python hardly matters in practice.

As I said, the evolutionary approach is also a great weakness of Graphite. Interfaces, it turns out, do not lend themselves well to gradual evolution. A good interface is consistent and employs conventions to maximize predictability. By this measure, Graphite's URL API is currently a sub-par interface in my opinion. Options and functions have been tacked on over time, sometimes forming small islands of consistency, but overall lacking a global sense of consistency. The only way to solve such a problem is through versioning of interfaces, but this too has drawbacks. Once a new interface is designed, the old one is still hard to get rid of, lingering around as evolutionary baggage like the human appendix. It may seem harmless enough, until one day your code gets appendicitis (i.e. a bug tied to the old interface) and you are forced to operate. If I were to change one thing about Graphite early on, it would have been to take much greater care in the design of the external APIs, thinking ahead instead of evolving them bit by bit.

Another aspect of Graphite that causes some frustration is the limited flexibility of the hierarchical metric naming model. While it is quite simple and very convenient for most use cases, it makes some sophisticated queries very difficult, even impossible, to express. When I first thought of creating Graphite, I knew from the very beginning that I wanted a human-editable URL API for creating graphs [7]. While I am still glad that Graphite provides this today, I am afraid this requirement has burdened the API with excessively simple syntax that makes complex expressions unwieldy. A hierarchy makes the problem of determining the primary key for a metric quite simple, because a path is essentially a primary key for a node in the tree. The downside is that all of the descriptive data (i.e. column data) must be embedded directly in the path. A potential solution is to maintain the hierarchical model and add a separate metadata database to enable more advanced selection of metrics with a special syntax.

7.11. Becoming Open Source

Looking back at the evolution of Graphite, I am still surprised both by how far it has come as a project and by how far it has taken me as a programmer. It started as a pet project that was only a few hundred lines of code. The rendering engine started as an experiment, simply to see if I could write one.
whisper was written in a single weekend out of desperation to solve a show-stopper problem before a critical launch date. carbon has been rewritten more times than I care to remember. Once I was allowed to release Graphite under an open source license in 2008, I never really expected much of a response. After a few months it was mentioned in a CNET article that got picked up by Slashdot, and the project suddenly took off and has been active ever since. Today there are dozens of large and mid-sized companies using Graphite. The community is quite active and continues to grow. Far from being a finished product, there is a lot of cool experimental work being done, which keeps it fun to work on and full of potential.

[1] launchpad.net/graphite
[2] There is another port over which serialized objects can be sent, which is more efficient than the plain-text format. This is only needed for very high levels of traffic.
[3] memcached.org
[4] Solid-state drives generally have extremely fast seek times compared to conventional hard drives.
[5] RRD files are actually branch nodes because they can contain multiple data sources; an RRD data source is a leaf node.
[6] Knuth specifically meant low-level code optimization, not macroscopic optimization such as design improvements.
[7] This forces the graphs themselves to be open source. Anyone can simply look at a graph's URL to understand it or modify it.

BSD Planet

February 24, 2017

The second release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC2 tag. Most of the changes made since 7.1_RC1 have been security fixes. See src/doc/CHANGES-7.1 for the full list. Please help us by testing 7.1_RC2. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at [email protected]

February 23, 2017

Goal: to use pkgcomp 2.0 to build a binary repository of all the packages you are interested in, to keep the repository fresh on a daily basis, and to use that repository with pkgin to keep your macOS system up-to-date and secure. This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more general tutorial that uses the pkgcomp-cron package in pkgsrc, see Keeping NetBSD up-to-date with pkgcomp 2.0.

Getting started

First download and install the standalone macOS installer package. To find the right file, navigate to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkgcomp-<version>-macos.pkg. Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local; note that pkgcomp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another. The installer modifies the default PATH (by creating /etc/paths.d/pkgcomp) to include pkgcomp's own installation directory and pkgsrc's installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts if you do not use the standard ones. Lastly, make sure to have Xcode installed in the standard /Applications/Xcode.app location and that all components required to build command-line apps are available. Tip: try running cc from the command line and seeing if it prints its usage message.
Adjusting the configuration

The macOS flavor of pkgcomp is configured with an installation prefix of /usr/local, which means that the executable is located at /usr/local/sbin/pkgcomp and the configuration files are in /usr/local/etc/pkgcomp. This is intentional, to keep the pkgcomp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in. The configuration files are as follows:

/usr/local/etc/pkgcomp/default.conf: This is pkgcomp's own configuration file, and the defaults configured by the installer should be good to go for macOS. In particular, packages are configured to go into /opt/pkg instead of the traditional /usr/pkg. This is a necessity because the latter is not writable starting with OS X El Capitan thanks to System Integrity Protection (SIP).

/usr/local/etc/pkgcomp/sandbox.conf: This is the configuration file for sandboxctl, the support tool that pkgcomp uses to manage the compilation sandbox. The default settings configured by the installer should be good.

/usr/local/etc/pkgcomp/extra.mk.conf: This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that's handled internally by pkgcomp as specified in default.conf.

/usr/local/etc/pkgcomp/list.txt: This determines the set of packages you want to build automatically (either via the auto command or via your periodic cron job). The automated builds will fail unless you list at least one package. Make sure to list pkgin here to install a better binary package management tool; you'll find this very handy to keep your installation up-to-date.

Note that these configuration files use the /var/pkgcomp directory as the dumping ground for the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.

The cron job

The installer configures a cron job that runs as root to invoke pkgcomp daily. The goal of this cron job is to keep your local packages repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e. This cron job won't have an effect until you have populated the list.txt file as described above, so it's safe to leave it enabled until you have configured pkgcomp. If you want to disable the periodic builds, just remove the pkgcomp entry from the crontab. On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from daily to weekly.

Sample configuration

Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial:

Building your own packages by hand

Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job kicks in. The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt, is something like this: This trivially-looking command will: clone or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the /var/pkgcomp/packages directory.
If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to: Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkgcomp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkgcomp cron job fails, and only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkgcomp/log.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkgcomp. First, unpack the pkgsrc installation. You only have to do this once: That's it. You can now install any packages you like: The commands above assume you have restarted your shell to pick up the correct path to the pkgsrc installation. If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date

Thanks to the cron job that builds your packages, your local repository under /var/pkgcomp/packages will always be up-to-date; you can use it to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository: And, from now on, all it takes to upgrade your system is:

February 22, 2017

At the obvious risk of this post getting downvoted and eventually closed as too biased/opinionated, I'd nevertheless ask this question. The NetBSD project's tagline is, of course, "it runs NetBSD". I understand that one of the main goals is to run on every possible piece of hardware out there (pages on the internet are full of possible hyperbole, such as "anything with a computing chip in it, even a toaster, shall run NetBSD"). However, if you examine the web pages of IoT hardware from the mid-2010s, there is poor visibility of NetBSD as the first choice of OS. E.g. on the Raspberry Pi, Raspbian is regarded as the go-to starter OS. Arduino's Wikipedia page says that it runs either Windows, macOS or Linux. Snappy Ubuntu Core and even Win10 IoT (gasp) are staking a claim as leading OSes in the IoT market. While I understand that the last two OSes mentioned above have corporate muscle behind them, even open-source job listings do not place much emphasis on NetBSD expertise. The question distills down to: why is NetBSD not considered the first-rate choice on this IoT hardware? This seems like an anti-pattern given the project's canonical goals.

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling. From my laptop I launch: Then, in another shell: The ssh debug says: I tried also with localhost:80 to connect to the (remote) web server, with identical results. The remote host runs NetBSD: I am a bit lost. I tried running tcpdump on the remote host, and I spotted these bad chksum: I tried restarting the ssh daemon, to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh.
February 20, 2017

Introduction

I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical CD-ROM) on NetBSD. I did not think at the time that it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this; other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh), which is a fully portable cross-build system. This build system has given us a head start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks, who have provided a platform to run, test and analyze reproducible builds. Special mention to the diffoscope tool, which gives an excellent overview of what's different between binary files by finding out what they are (and, if they are containers, what they contain) and then running the appropriate formatter and diff program to show what's different for each file. Finally, other developers have started, motivated and done a lot of the work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh.

Sources of difference

Here is what we found that we needed to fix, how we chose to fix it and why, and where we are now. There are many reasons why two separate builds from the same sources can be different. Here is an (incomplete) list:

Timestamps: Many things like to keep track of timestamps, especially archive formats (tar(1), ar(1)), filesystems, etc. The way to handle each is different, but the approach is to make them either produce files with a 0 timestamp (where it does not matter, as with ar) or with a specific timestamp when 0 does not make sense (because it is not useful to the user).

Dates/times/authors etc. embedded in source files: Some programs like to report the date and time they were built, the author, the system they were built on, etc. This can be done either by programmatically finding and creating source files containing that information at build time, or by using standard macros such as __DATE__ and __TIME__. Usually putting in a constant time, or eliding the information (as we do with kernels and bootblocks), solves the problem.

Timezone-sensitive code: Certain filesystem formats (ISO 9660 etc.) don't store raw timestamps but formatted times; to achieve this they convert from a timestamp to local time, so they are affected by the timezone.

Directory order / build order: The build order is not constant, especially in the presence of parallel builds; neither is directory scan order. If those are used to create output files, the output files will need to be sorted so they become consistent.

Non-sanitized data stored in files: Writing data structures into raw files can lead to problems. Running the same program on different operating systems, or using ASLR, makes those issues more obvious.

Symbolic links / paths: Having paths embedded into binaries (especially for debugging information) can lead to binary differences. Propagation of the logical path can prove problematic.

General tool inconsistencies: gcc(1) profiling uses a PROFILE_HOOK macro on RISC targets that uses the current function number to produce labels, and the processing order of functions is not guaranteed. gpt(8) creation involves UUID generation; these are generally random. Block allocation on msdos filesystems had a random component.
makefs(8) uses timezones with timestamps (ISO 9660), randomness for block selection (msdos), and stores stray pointers in the superblock (FFS). Every program that is used to generate other output needs to have consistent results. On NetBSD this is done with build.sh, which builds a set of tools from known sources before it can use those tools to build the rest of the system. There is a large number of such tools. There are also internal issues with the tools that make their output non-reproducible, such as non-deterministic symbol creation or capturing parts of the environment in debugging information.

Build information / tunables / environment: There are many environment settings, or build variable settings, that can affect the build. These need to be kept constant across builds, so we've changed the list of variables that are reported in Makefile.params. We also need to make sure that the source tree has no local changes.

Variables controlling reproducible builds

Reproducible builds are controlled on NetBSD with two variables: MKREPRO (which can be set to yes or no) and MKREPRO_TIMESTAMP, which is used to set the timestamp of the build's artifacts. The latter is usually set to the number of seconds from the epoch. The build.sh -P flag handles reproducible builds automatically: it sets the MKREPRO variable to yes, then finds the latest source file timestamp in the tree and sets MKREPRO_TIMESTAMP to that.

Handling timestamps

The first thing we needed to understand was how to deal with timestamps. Some of the timestamps are not very useful (for example inside random ar archives), so we chose to zero them out. Others, though, become annoying if they are all 0: what does it mean when you mount install media and all the dates on the files are Jan 1, 1970? We decided that a better timestamp would be the timestamp of the most recently modified file in the source tree. Unfortunately this was not easy to find on NetBSD, because we are still using CVS as the source control system, and CVS does not have a good way to provide that. For that we wrote a tool called cvslatest, which scans the CVS metadata files (CVS/Entries) and finds the latest commit. This works well for freshly checked-out trees (since CVS uses the source timestamp when checking out), but not for updated trees (because CVS uses the current time when updating files, so that make(1) thinks they've been modified). To fix that, we've added a new flag to the cvs(1) update command, -t, which uses the source checkout time. The build system now needs to evaluate the tree for the latest file by running cvslatest(1) and find the latest timestamp in seconds from the epoch, which is set in the MKREPRO_TIMESTAMP variable. This is the same as SOURCE_DATE_EPOCH. Various Makefiles use this variable and MKREPRO to determine how to produce consistent build artifacts. For example, many commands (tar(1), makefs(8), gpt(8), ...) have been modified to take a --timestamp or -T command-line switch to generate output files that use the given timestamp instead of the current time. Other software (am-utils, ACPICA, bootblocks, the kernel) used __DATE__ or __TIME__, or captured the user, machine, etc. from the environment, and had to be changed to a constant time, user, machine, etc. roff(7) documents used the td macro to generate the date of formatting in the document; these have been changed to use the macro conditionally based on a register R, for example as in intro.me, and then the Makefile was changed to set that register for MKREPRO.
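MKREPRO_TIMESTAMP plays the same role as the SOURCE_DATE_EPOCH convention mentioned above. A build step that embeds a date can honor such a variable along these lines; this is a generic sketch, not NetBSD's actual build machinery:

    import os
    import time

    # Use the externally supplied timestamp when present so that two builds of
    # the same tree embed the same date; fall back to the current time otherwise.
    epoch = int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))
    build_date = time.strftime('%Y-%m-%d %H:%M:%S UTC', time.gmtime(epoch))

    with open('version.h', 'w') as out:
        out.write('#define BUILD_DATE "%s"\n' % build_date)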
Handling order

We don't control the build order of things, and we also don't control the directory order, which can be filesystem dependent. The collation order is also environment specific, and sorting needs to be stable (we have not encountered that problem yet). Two different programs caused us problems here: file(1), with the generation of the compiled magic file using directory order (fixed by changing file(1)), and install-info(1), with texinfo(5) files that have no specific order. For the latter we developed another tool called sortinfo(1) that sorts those files as a post-processing step. Fortunately the filesystem builders and tar programs usually work with input directories that appear to have a consistent order so far, so we did not have to fix things there.

Permissions

NetBSD already keeps permissions for most things consistent in different ways: the build system uses install(8) and specifies ownership and mode, and the mtree(8) program creates build artifacts using consistent ownership and permissions. Nevertheless, the various architecture-specific distribution media installers used cp(1) and mkdir(1) and needed to be corrected.

Most of the issues found had to do with capturing the environment in debugging information. The two biggest issues were DW_AT_producer and DW_AT_comp_dir. Here you see two changes we made for reproducible builds: we chose to allow variable names (and have gcc(1) expand them) for the source of the prefix map, because the source tree location can vary; others have chosen to skip -fdebug-prefix-map from the variables to be listed. We also added -fdebug-regex-map so that we could handle the NetBSD-specific objdir build functionality: object directories can have many flavors in NetBSD, so it was difficult to use -fdebug-prefix-map to capture that. DW_AT_comp_dir presented a different challenge. We got non-reproducibility when building on paths where either the source or the object directories contained symbolic links. Although gcc(1) does the right thing handling logical paths (it respects PWD), we found that there were problems both in the NetBSD sh(1) (fixed here) and in the NetBSD make(1) (fixed here). Unfortunately we can't depend on the shell to obey the logical path, so we decided to rely on make(1) instead. This works because make(1) is a tool (part of the toolchain we provide) whereas sh(1) is not.

Another weird issue popped up on sparc64, where a single file in the whole source tree does not build reproducibly. This file is asn1_krb5_asn1.c, which is generated during the build. The problem is that when profiling on RISC machines gcc uses the PROFILE_HOOK macro, which in turn uses the function number to generate labels. This number is assigned to each function in a source file as it is being compiled. Unfortunately this number is not deterministic because of optimization (a bug), but fortunately turning optimization off fixes the problem.

Status and future work

As of 2017-02-20 we have fully reproducible builds on amd64 and sparc64. We are planning to work on the following areas: vary more parameters on the system build (filesystem types, build OSs); verify that cross building is reproducible; verify that unprivileged builds work; test on all the platforms.

February 19, 2017 At the second annual PillarCon, I facilitated a workshop called Fundamentals of C and Embedded using Mob Programming. On a Mac, we test-drove toggling a Raspberry Pi's onboard LED.
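For readers who have never poked at the hardware side of this exercise, here is an illustrative sketch of what toggling that LED looks like outside the test harness. It assumes the Pi is running Linux and exposes the green ACT LED as led0 through sysfs; none of this comes from the workshop repository.

    # Illustrative only; assumes a Raspberry Pi running Linux where the
    # ACT LED shows up as led0 in sysfs.
    echo none > /sys/class/leds/led0/trigger      # take manual control of the LED
    echo 1 > /sys/class/leds/led0/brightness      # ACT LED on
    echo 0 > /sys/class/leds/led0/brightness      # ACT LED off
    echo mmc0 > /sys/class/leds/led0/trigger      # restore a common default trigger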
Before and after. Before: ACT LED off.

Here are the takeaways we wrote down:
- Could test return type of main()
- Why wasn't num_calls 0 to begin with?
- Maybe provide the mocks in advance (maybe use CMock)
- Fun idea: fake GPIO device
- Vim tricks: cool, but maybe use an easier editor for the target audience
- Appropriate amount of effort; need bigger payoff
- Mob programming supported the learning process/objective

My own thoughts for next time I do this material:
- Try: providing the mocks in the starting state
- Keep: providing a multi-target Makefile and prebuilt cross compiler
- Try: using a more discoverable (e.g. non-modal) text editor
- Keep: being prepared with a test list
- Try: providing already-written test cases to uncomment one at a time (one of the aspects of James Grenning's training course I especially loved)
- Keep: being prepared with corners to cut if time gets short
- Try: knowing more of the mistakes we might make when cutting corners
- Keep: mobbing

Participants who already knew some of this stuff liked the mobbing (new to some of them) and appreciated how I structured the material to unfold. Participants who were new to C and/or embedded (my target audience) came away feeling that they needn't be intimidated by it, and that programming in this context can be as fun and feedbacky as they're accustomed to. Play along at home: grab the workshop materials, then follow the steps outlined in the README. Further learning: you're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it. Or if you'd like me to come facilitate it for your company, meetup group, etc., let's talk.

February 18, 2017 This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD. Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your NetBSD system up-to-date and secure. This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform.

Getting started

First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are. Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.

Adjusting the configuration

To use pkg_comp for periodic builds, you'll need to make some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp/, which is pkg_comp-cron's home:

/var/pkg_comp/pkg_comp.conf: This is pkg_comp's own configuration file, and the defaults installed by pkg_comp-cron should be good to go. The contents here are divided in three major sections: a declaration of how to download pkgsrc, the definition of the file system layout on the host machine, and the definition of the file system layout for the built packages. You may want to customize the target system paths, such as LOCALBASE or SYSCONFDIR, but you should not have to customize the host system paths.

/var/pkg_comp/sandbox.conf: This is the configuration file for sandboxctl.
The default settings installed by pkg_comp-cron should suffice if you used the sysutils/sysbuild-user package as recommended; otherwise tweak the NETBSD_NATIVE_RELEASEDIR and NETBSD_SETS_RELEASEDIR variables to point to where the downloaded release sets are.

/var/pkg_comp/extra.mk.conf: This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that's handled internally by pkg_comp as specified in pkg_comp.conf.

/var/pkg_comp/list.txt: This determines the set of packages you want to build in your periodic cron job. The builds will fail unless you list at least one package. WARNING: Make sure to include pkg_comp-cron and pkgin in this list so that your binary kit includes these essential package management tools; otherwise you'll have to deal with some minor annoyances after rebootstrapping your system.

Lastly, review root's crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from daily to weekly.

Sample configuration

Here is what the configuration looks like on my NetBSD development machine, as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial:

Building your own packages by hand

Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation, is something like this (a rough recap sketch appears a little further down):

This trivially-looking command will: check out or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages.

If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details.

Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg, so you have to wipe your existing packages first to avoid build mismatches. WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won't be compatible with the ones you already have. Avoid any trouble by starting afresh.
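As a recap of the commands mentioned so far, here is an approximate sketch. The auto, config and sandbox-shell subcommands are the ones referenced in this tutorial; the package arguments are only examples, and depending on your setup you may need an extra flag to point pkg_comp at /var/pkg_comp/pkg_comp.conf.

    pkg_comp config                    # dump the effective configuration
    pkg_comp auto                      # automated build of everything in list.txt
    pkg_comp auto pkgtools/pkgin       # manual build restricted to specific packages
    pkg_comp sandbox-shell             # enter the sandbox interactively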
To clean your system, do something like this:

Now, rebootstrap pkgsrc and reinstall any packages you previously had:

Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits, so this should be easy. IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial.

Keeping your system up-to-date

If you paid attention when you installed the pkg_comp-cron package, you should have noticed that it configured a cron job to run pkg_comp daily. This means that your package repository under /var/pkg_comp/packages will always be up-to-date, so you can use it to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin (and why not), configure your local repository:

And, from now on, all it takes to upgrade your system is:

Lots of storage this week.

February 17, 2017 After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here! Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts.

What are these tools?

pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host. The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase.

sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems. sandboxctl is the backing tool behind pkg_comp. It hides the details of creating a functional chroot sandbox on all supported operating systems; in some cases, like building a NetBSD sandbox using release sets, things are easy, but in others, like on macOS, they are horrifyingly difficult and brittle.

Storytelling time

pkg_comp's history is a long one. pkg_comp 1.0 first appeared in pkgsrc on September 6th, 2002 as the pkgtools/pkg_comp package. As of this writing, the 1.x series are at version 1.38 and have received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004. This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at 14 years of age and is possibly one of my longest-living pieces of software still in use.

Motivation for the 2.x rewrite

For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen (and still happens to this day) is that, once in a while, I'd realize that my packages were out of date (read: insecure), so I'd wipe the whole pkgsrc installation and start from scratch. Very inconvenient; I had to automate that properly.
Thus the main motivation behind the rewrite was primarily to support macOS because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron I wanted the exact same thing for my packages. One, two no, three rewrites The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell Just because that was the new hotness in my mind and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and thats possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install it, and guaranteed to alienate contributors. The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE. This was after I became quite familiar with Python at work, wanting to use the language to rewrite this tool. That experiment didnt go very far though, but I cant remember why probably because I was busy enough at work and creating Kyua. The third and final rewrite attempt started in 2013 while I had a summer intern and I had a little existential crisis. The year before I had written sysbuild and shtk. so I figured recreating pkgcomp using the foundations laid out by these tools would be easy. And it was to some extent. Getting the barebones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7amand, many weeks later, the code is finally here and Im still keeping up with this schedule. Granted: this third rewrite is not a fancy one, but it wasnt meant to be. pkgcomp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently after playing with it last year but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time. The launch of 2.0 On February 12th, 2017, the authoritative sources of pkgcomp 1.x were moved from pkgtoolspkgcomp to pkgtoolspkgcomp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc and the 2.x series exist as a standalone project on GitHub . And here we are. Today, February 17th, 2017, pkgcomp 2.0 saw the light Why sandboxctl as a separate tool sandboxctl is the supporting tool behind pkgcomp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult like macOS. In pkgcomp 1.x, this logic used to be bundled right into the pkgcomp code, which made it pretty much impossible to generalize for portability. With pkgcomp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but allows for better testability and understandability. 
Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs. I know, I know the world has moved onto containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing but thats all weve got in NetBSD, and pkgcomp targets primarily NetBSD. Note, though, that because pkgcomp is separate from sandboxctl, there is nothing preventing adding different sandboxing backends to pkgcomp. Installation Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping a installer image that includes all of pkgcomp dependenciesbut I did not want to block the first launch on this. For now though, you need to download and install the latest source releases of shtk. sandboxctl. and pkgcomp in this order pass the --with-atfno flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system. If you are already using pkgsrc, you can install the pkgtoolspkgcomp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtoolspkgcomp-cron package to create a pre-configured environment with a daily cron job to run your builds. See the packages MESSAGE (with pkginfo pkgcomp-cron ) for more details. Documentation Both pkgcomp and sandboxctl are fully documented in manual pages. See pkgcomp(8). sandboxctl(8). pkgcomp. conf(5) and sandbox. conf(5) for plenty of additional details. As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkgcomp on, at least, NetBSD and macOS. Stay tuned. And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmvpkgcomp and jmmvsandboxctl . February 16, 2017 I claim an IPv6 address using ifconfig in a script. This address is then immediately used to listen on a TCP port. When I write the script like this, it fails because the service is unable to listen: However, it succeeds when I do it like this: I tried writing the output of ifconfig directly after running the add - operation. It appears that ifconfig reports the IP-address as being tentative . which apparently prevents a service from listening on it. Naturally, waiting exactly one second and hoping that the address has become available is not a very good way to handle this. How can I wait for a tentative address to become available, or make ifconfig return later so that the address is all set up I finally registered, have been reading the forum for years. Ill simply copy this from LQ. Already have written to a couple of lists (including netbsd-users) but without results. Running 7.0.2 with out of the box kernel. All my GTK2 apps segfault on keyboard input. lxappearance for example, when looking for a theme you can start pressing keys and it will search. But in my case it dumps core with usrliblibpthread. so.1 . usrliblibc. so.12 and usrpkgliblibXcursor. so.1 . The same thing happens when typing something into a GTK2 text editor, leafpad, or looking for something in CtrlO window in firefox or gimp or any other programme. gimp cant even run inside gdb because of: Program received signal SIGTRAP, Tracebreakpoint trap. 0x00007f7fea49f6aa in lwppark60 () from usrliblibc. so.12 (gdb) bt 0 0x00007f7fea49f6aa in lwppark60 () from usrliblibc. 
so.12
#1  0x00007f7fec808f2b in pthread_cond_timedwait () from /usr/lib/libpthread.so.1
#2  0x00007f7feb880b80 in g_cond_wait () from /usr/pkg/lib/libglib-2.0.so.0
#3  0x00007f7feb81d7cd in g_async_queue_pop_intern_unlocked () from /usr/pkg/lib/libglib-2.0.so.0
#4  0x00007f7feb86742f in g_thread_pool_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
#5  0x00007f7feb866a7d in g_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
#6  0x00007f7fec80a9cc in ?? () from /usr/lib/libpthread.so.1
#7  0x00007f7fea483de0 in ?? () from /usr/lib/libc.so.12
#8  0x0000000000000000 in ?? ()
Firefox also has problems in libc.so.12 and libpthread.so.1, but doesn't mention lwppark60, and it also can't run inside gdb. lxappearance also dumps core when clicking Apply after changing something (themes, cursor or icon themes, fonts etc.), with another output:
#0  0x00007f7fefcb27ba in ?? () from /usr/lib/libc.so.12
#1  0x00007f7fefcb2bc7 in malloc () from /usr/lib/libc.so.12
#2  0x00007f7ff1849782 in g_malloc () from /usr/pkg/lib/libglib-2.0.so.0
#3  0x00007f7ff185ef1c in g_memdup () from /usr/pkg/lib/libglib-2.0.so.0
#4  0x00007f7ff18356b8 in g_hash_table_insert_node () from /usr/pkg/lib/libglib-2.0.so.0
#5  0x00007f7ff1835823 in g_hash_table_insert_internal () from /usr/pkg/lib/libglib-2.0.so.0
#6  0x00007f7ff183ccb1 in g_key_file_flush_parse_buffer () from /usr/pkg/lib/libglib-2.0.so.0
#7  0x00007f7ff183cf62 in g_key_file_parse_data () from /usr/pkg/lib/libglib-2.0.so.0
#8  0x00007f7ff183d0e1 in g_key_file_load_from_fd () from /usr/pkg/lib/libglib-2.0.so.0
#9  0x00007f7ff183d99e in g_key_file_load_from_file () from /usr/pkg/lib/libglib-2.0.so.0
#10 0x0000000000405532 in _start ()
Apart from these programs, I receive SIGILL in mplayer when trying to play videos; its backtrace doesn't tell anything useful. sxiv, an image viewer, segfaults with this:
#0  0x00007f7ff64b209f in ?? () from /usr/lib/libc.so.12
#1  0x00007f7ff64b3983 in free () from /usr/lib/libc.so.12
#2  0x000000000040729c in remove_file ()
#3  0x0000000000409a92 in main ()
Previously it worked when built from the local pkgsrc tree, but now it has stopped working entirely. mpg321 dumps core and says "Memory fault" with this backtrace:
#0  0x00007f7ff78068b1 in sem_post () from /usr/lib/libpthread.so.1
#1  0x000000000040afe0 in ?? ()
#2  0x0000000000403695 in ?? ()
#3  0x00007f7ff7ffa000 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00007f7ffffffdb0 in ?? ()
#6  0x00007f7ffffffdb7 in ?? ()
#7  0x0000000000000000 in ?? ()
I did memtests, once for four hours (two passes) and once for eight hours (eight passes). I did Dell's ePSA tests (a diagnostic utility accessed from the BIOS), which has its own memtest and also checks the hard drive, the power supply, the keyboard, the fans and the CPU; all of them returned no errors. I rebuilt gtk2 with debug symbols but it changed nothing. On LQ it was suggested that I have hardware problems, but I am not convinced. Every program described above worked inside an Ubuntu LiveUSB and a Void Linux LiveUSB on the same machine (picked because they have different libcs). Before, when I ran NetBSD with X11 a couple of months ago (and earlier), I didn't have these errors. On the Internet I found similar messages on an Arch forum and on Launchpad. Is there a need for a 24-hour memtest? Should I just remove each of the two memory modules and try? Is it hardware related after all? Thanks everyone for any kind of help.

February 14, 2017 The LLVM project is a quickly moving target; this also applies to the LLVM debugger -- LLDB.
Its actively used in several first-class operating systems, while - thanks to my spare time dedication - NetBSD joined the LLDB club in 2014, only lately the native support has been substantially improved and the feature set is quickly approaching the support level of Linux and FreeBSD. During this work 12 patches were committed to upstream, 12 patches were submitted to review, 11 new ATF were tests added, 2 NetBSD bugs filed and several dozens of commits were introduced in pkgsrc-wip, reducing the local patch set to mostly Native Process Plugin for NetBSD. What has been done in NetBSD 1. Triagged issues of ptrace(2) in the DTraceNetBSD support Chuck Silvers works on improving DTrace in NetBSD and he has detected an issue when tracer signals are being ignored in libproc . The libproc library is a compatibility layer for DTrace simulating proc capabilities on the SunOS family of systems. Ive verified that the current behavior of signal routing is incorrect. The NetBSD kernel correctly masks signals emitted by a tracee, not routing them to its tracer. On the other hand the masking rules in the inferior process blacklists signals generated by the kernel, which is incorrect and turns a debugger into a deaf listener. This is the case for libproc as signals were masked and software breakpoints triggering INT3 on i386 amd64 CPUs and SIGTRAP with TRAPBRKP sicode wasnt passed to the tracer. This isnt limited to turning a debugger into a deaf listener, but also a regular execution of software breakpoints requires: rewinding the program counter register by a single instruction, removing trap instruction and restoring the original instruction. When an instruction isnt restored and further code execution is pretty randomly affected, it resulted in execution anomalies and breaking of tracee. A workaround for this is to disable signal masking in tracee. Another drawback inspired by the DTrace code is to enhance PTSYSCALL handling by introducing a way to distinguish syscall entry and syscall exit events. Im planning to add dedicated sicodes for these scenarios. While there, there are users requesting PTSTEP and PTSYSCALL tracing at the same time in an efficient way without involving heuristcs. Ive filed the mentioned bug: Ive added new ATF tests: Verify that masking single unrelated signal does not stop tracer from catching other signals Verify that masking SIGTRAP in tracee stops tracer from catching this raised signal Verify that masking SIGTRAP in tracee does not stop tracer from catching software breakpoints Verify that masking SIGTRAP in tracee does not stop tracer from catching single step trap Verify that masking SIGTRAP in tracee does not stop tracer from catching exec() breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEFORK breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEVFORK breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEVFORKDONE breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACELWPCREATE breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACELWPEXIT breakpoint 2. EL F Auxiliary Vectors The ELF file format permits to transfer additional information for a process with a dedicated container of properties, its named ELF Auxilary Vector . Every system has its dedicated way to read this information in a debugger from a tracee. 
The NetBSD approach is to transfer this vector with a ptrace (2) API PIODREADAUXV . Our interface shares the API with OpenBSD. I filed a bug that our interface returns vector size of 8496 bytes, while OpenBSD has constant 64 bytes. It was diagnosed and fixed by Christos Zoluas that we were incorrectly counting bits and bytes and this enlarged the data streamlined. The bug was harmless and had no known side-effects besides large chunk of zeroed data. There is also a prepared local patch extending NetBSD platform support to read information for this vector, its primarily required for correct handling of PIE binaries. At the moment there is no interface similar to info auxv to the one from GDB. Unfortunately at the current stage, this code is still unused by NetBSD. I will return to it once the Native Process Plugin is enhanced. Ive filed the mentioned bug: Ive added new ATF test: Verify PTREADAUXV called for tracee . What has been done in LLDB 1. Resolving executables name with sysctl(7) In the past the way to retrieve a specified process executable path name was using Linux-compatibile feature in procfs ( proc ). The canonical solution on Linux is to resolve path of procPIDexe . Christos Zoulas added in DTrace port enhancements a solution similar to FreeBSD to retrieve this property with sysctl (7). This new approach removes dependency on proc mounted and Linux compatibility functionality. Support for this has been submitted to LLDB and merged upstream: 2. Real-Time Signals The key feature of the POSIX standard with Asynchronous IO is to support Real-Time Signals. One of their use-cases is in debugging facilities. Support for this set of signals was developed during Google Summer of Code 2016 by Charles Cui and reviewed and committed by Christos Zoulas. Ive extended the LLDB capabilities for NetBSD to recognize these signals in the NetBSDSignals class. Support for this has been submitted to LLDB and merged upstream: 3. Conflict removal with system-wide six. py The transition from Python 2.x to 3.x is still ongoing and will take a while. The current deadline support for the 2.x generation has been extended to 2020. One of the ways to keep both generations supported in the same source-code is to use the six. py library (py2 x py3 6.py). It abstracts commonly used constructs to support both language families. The issue for packaging LLDB in NetBSD was to install this tiny library unconditionally to a system-wide location. There were several solutions to this approach: drop Python 2.x support, install six. py into subdirectory, make an installation of six. py conditional. The first solution would turn discussion into flamewar, the second one happened to be too difficult to be properly implemented as the changes were invasive and Python is used in several places of the code-base (tests, bindings. ). The final solution was to introduce a new CMake option LLDBUSESYSTEMSIX - disabled by default to retain the current behavior. To properly implement LLDBUSESYSTEMSIX . I had to dig into installation scripts combined in CMake and Python files. It wasnt helping that Python scripts were reinventing getopt (3) functionality. and I had to alter it in order to introduce a new option --useSystemSix . Support for this has been submitted to LLDB and merged upstream: 4. Do not pass non-POD type variables through variadic function There was a long standing local patch in pkgsrc, added by Tobias Nygren and detected with Clang. 
According to the C11 standard 5.2.27: Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics. A short example to trigger similar warning was presented by Joerg Sonnenberg: This code compiled against libc gives: Support for this has been submitted to LLDB and merged upstream: 5. Add NetBSD support in Host::GetCurrentThreadID Linux has a very specific thread model, where process is mostly equivalent to native thread and POSIX thread - its completely different on other mainstream general-purpose systems. That said fallback support to translate pthreadt on NetBSD to retrieve the native integer identifier was incorrect. The proper NetBSD function to retrieve light-weigth process identification is to call lwpself (2). Support for this has been submitted to LLDB and merged upstream: 6. Synchronize PlatformNetBSD with Linux The old PlatformNetBSD code was based on the FreeBSD version. While the FreeBSD current one is still similar to the one from a year ago, its inappropriate to handle a remote process plugin approach. This forced me to base refreshed code on Linux. After realizing that PlatformPlugin on POSIX platforms suffers from code duplication, Pavel Labath helped out to eliminate common functions shared by other systems. This resulted in a shorter patch synchronizing PlatformNetBSD with Linux, this step opened room for FreeBSD to catch up. Support for this has been submitted to LLDB and merged upstream: 7. Transform ProcessLauncherLinux to ProcessLauncherPosixFork It is UNIX specific that signal handlers are global per application. This introduces issues with wait (2)-like functions called in tracers, as these functions tend to conflict with real-life libraries, notably GUI toolkits (where SIGCHLD events are handled). The current best approach to this limitation is to spawn a forkee and establish a remote connection over the GDB protocol with a debugger frontend. ProcessLauncherLinux was prepared with this design in mind and I have added support for NetBSD. Once FreeBSD will catch up, they might reuse the same code. Support for this has been submitted to LLDB and merged upstream: reviews. llvm. orgD29347 - Add ProcessLauncherNetBSD to spawn a tracee renamed to Transform ProcessLauncherLinux to ProcessLauncherPosixFork committed r293768 8. Document that LaunchProcessPosixSpawn is used on NetBSD Host::GetPosixspawnFlags was built for most POSIX platforms - however only Apple, Linux, FreeBSD and other-GLIBC ones (I assume DebiankFreeBSD to be GLIBC-like) were documented. Ive included NetBSD to this list. Support for this has been submitted to LLDB and merged upstream: Document that LaunchProcessPosixSpawn is used on NetBSD committed r293770 9. Switch std::callonce to llvm::callonce There is a long-standing bug in libstdc on several platforms that std::callonce is broken for cryptic reasons. This motivated me to follow the approach from LLVM and replace it with homegrown fallback implementation llvm::callonce . This change wasnt that simple at first sight as the original LLVM version used different semantics that disallowed straight definition of non - static onceflag . Thanks to cooperation with upstream the proper solution was coined and LLDB now works without known regressions on libstdc out-of-the-box. Support for this has been submitted to LLVM, LLDB and merged upstream: 10. 
Other enhancements I a had plan to push more code in this milestone besides the mentioned above tasks. Unfortunately not everything was testable at this stage. Among the rescheduled projects: In the NetBSD platform code conflict removal in GetThreadName SetThreadName between pthreadt and lwpidt . It looks like another bite from the Linux thread model. Proper solution to this requires pushing forward the Process Plugin for NetBSD. Host::LaunchProcessPosixSpawn proper setting ::posixspawnattrsetsigdefault on NetBSD - currently untestable. Fix false positives - premature before adding more functions in NetBSD Native Process Plugin. On the other hand Ive fixed a build issue of one test on NetBSD: Plan for the next milestone Ive listed the following goals for the next milestone. mark exect (3) obsolete in libc remove libpthreaddbg (3) from the base distribution add new API in ptrace (2) PTSETSIGMASK and PTGETSIGMASK add new API in ptrace (2) to resume and suspend a specific thread finish switch of the PTWATCHPOINT API in ptrace (2) to PTGETDBREGS amp PTSETDBREGS validate i386, amd64 and Xen proper support of new interfaces upstream to LLDB accessors for debug registers on NetBSDamd64 validate PTSYSCALL and add a functionality to detect and distinguish syscall-entry syscall-exit events validate accessors for general purpose and floating point registers Post mortem FreeBSD is catching up after NetBSD changes, e. g. with the following commit: This move allows to introduce further reduction of code-duplication. There still is a lot of room for improvement. Another benefit for other software distributions, is that they can now appropriately resolve the six. py conflict without local patches. These examples clearly show that streamlining NetBSD code results in improved support for other systems and creates a cleaner environment for introducing new platforms. A pure NetBSD-oriented gain is improvement of system interfaces in terms of quality and functionality, especially since DTraceNetBSD is a quick adopter of new interfaces. and indirectly a sandbox to sort out bugs in ptrace (2). The tasks in the next milestone will turn NetBSDs ptrace (2) to be on par with Linux and FreeBSD, this time with marginal differences. To render it more clearly NetBSD will have more interfaces in readwrite mode than FreeBSD has (and be closer to Linux here), on the other hand not so many properites will be available in a thread specific field under the PTLWPINFO operation that caused suspension of the process. Another difference is that FreeBSD allows to trace only one type of syscall events: on entry or on exit. At the moment this is not needed in existing software, although its on the longterm wishlist in the GDB project for Linux. It turned out that, I was overly optimistic about the feature set in ptrace (2), while the basic ones from the first milestone were enough to implement basic support in LLDB. it would require me adding major work in heuristics as modern tracers no longer want to perform guessing what might happened in the code and what was the source of signal interruption. This was the final motivation to streamline the interfaces for monitoring capabilities and now Im adding remaining interfaces as they are also needed, if not readily in LLDB, there is DTrace and other software that is waiting for them now. Somehow I suspect that I will need them in LLDB sooner than expected. This work was sponsored by The NetBSD Foundation. 
The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue to fund projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can: February 09, 2017 We became tired of waiting. File Info: 7Min, 3MB. Ogg Link: archive. orgdownloadbsdtalk266bsdtalk266.ogg February 08, 2017 Background I am using a sparc64 Sun Blade 2500 (silver) as a desktop machine - for my pretty light desktop needs. Besides the usual developer tools (editors, compilers, subversion, hg, git) and admin stuff (all text based) I need mpg123 and mserv for music queues, Gimp for image manipulation and of course Firefox. Recently I updated all my installed pkgs to pkgsrc-current and as usual the new Firefox version failed to build. Fortunately the issues were minor, as they all had been handled upstream for Firefox 52 already, all I needed to do was back-porting a few fixes. This made the pkg build, but after a few minutes of test browsing, it crashed. Not surprisingly this was reproducible, any web site trying to play audio triggered it. A bit surprising though: the same happened on an amd64 machine I tried next. After a bit digging the bug was easy to fix, and upstream already took the fix and committed it to the libcubeb repository. So I am now happily editing this post using Firefox 51 on the Blade 2500. I saw one crash in two days of browsing, but unfortunately could not (yet) reproduce it (I have gdb attached now). There will be future pkg updates certainly. Future Obstacles You may have read elsewhere that Firefox will start to require a working Rust compiler to build. This is a bit unfortunate, as Rust (while academically interesting) is right now not a very good implementation language if you care about portability. The only available compiler requires a working LLVM back end, which we are still debugging. Our auto-builds produce sparc sets with LLVM, but the result is not fully working (due to what we believe being code gen bugs in LLVM). It seems we need to fix this soon (which would be good anyway, independent of the Rust issue). Besides the back end, only very recently traces of sparc64 support popped up in Rust. However, we still have a few firefox versions time to get it all going. I am optimistic. Another upcoming change is that Cairo (currently used as 2D graphics back end, at least on sparc64) will be phased out and Skia will be the only supported software rendering target. Unfortunately Skia does (as of now) not support any big endian machine at all. I am looking for help getting Skia to work on big endian hardware in general, and sparc64 in particular. Alternatives Just in case, I tested a few other browsers and (so far) they all failed: NetSurf Nice, small, has a few tweaks and does not yet support JavaScript good enough for many sites MidoriThey call it lightweight but it is based on WebKit, which alone is a few times more heavy than all of Firefox. It crashes immediately at startup on sparc64 (I am investigating, but with low priority - actually I had to replace the hard disk in my machine to make enough room for the debug object files for WebKit - it takes So, while it is a bit of a struggle to keep a modern browser working on my favorite odd-ball architecture, it seems we will get at least to the Firefox 52 ESR release, and that should give us enough time to get Rust working and hopefully continue with Firefox. February 07, 2017 So finally Ive moved all services from my old server to my Christmas Xen box. 
This was not without problems due to the fact it had to run NetBSD - current gcc toolchain is broken for some packages which affected running any PHP build clang toolchain was broken for my config (USESSP yes and . February 04, 2017 Note the end this week of pc98, the most focused of niche platforms. January 31, 2017 What has been done in NetBSD What has been done in LLDB Plan for the next milestone Accidental theme this week: books. What are the techniques generally people follow to dump full core dump if the size of core dump is more than the RAM and flash. Say, kernel core is of 2GB size but we have exactly 2GB of RAM and 1GB of disk space. I am aware external USB and tftp options. But, reliability and stability matters when we choose these options. How do embedded people handle these type of issues and what are the techniques available Platform: NetBSD, ARM7 January 18, 2017 Previously This is the sixth in a series of Nifty and Minimally Invasive qmail Tricks, following Losing services (and eventually restoring them) When my Mac mini s hard drive died in the Great Crash of Fall 2008. taking this website and my email offline with it, I was already going through a rough time, and my mental bandwidth was extremely limited. I expended some of it explaining to friends what they could do about their hosted domains until such time as my brain became available again (as I assumed andor hoped it eventually would). I expended a bit more asking a friend to do a small thing to keep my email flowing somewhere I could get it. And then I was spent. The years where I used Gmail and had no website felt like years in the wilderness. That feeling could mostly have been about how I missed the habit of reflecting about my life now and again, writing about it, and sharing. But when the website returned four years ago (in order to remember Aaron Swartz ), the feeling didnt go away. All I got was a small sense of relief that my writings and recordings were available and that I could safely revive my old habit. After a year and half of reflecting, writing, and sharing, the feels-needle hadnt rebounded much further. It was only after painstakingly restoring all my old email (from Mail. apps cache, using emlx2maildir ), moving it up to my IMAP server, carefully merging six years worth of Gmail into that, accepting SMTP deliveries for schmonz. and not needing Gmail at all for several weeks that I noticed my long, strange sojourn had ended. Hypothetically speaking If it so happened that Id instead fixed email first, Id also have felt a tiny bit weird till my website was back. But only a tiny bit. When my web servers down, you might not hear from me when my mail servers down, I cant hear from you or, as happened in 2008, from my professors during finals week. So while web hosting can be interesting. mail hosting keeps me attached to what it feels like to be responsible for a production service. Keeping it real I value this firsthand understanding very, very highly. I started as a sysadmin, Im often still a developer, and thats part of why Im sometimes helpful to others. But since Im always in danger of forgetting lessons I learned by doing it, Im always in danger of being harmful when I try to help others do it . As a coach, one of my meta-jobs is to remind myself what it takes to know the risks, decide to ship it, live with the consequences, tighten the shipping-it loop until its tight enough, and notice when that stops being true. And thats why I run my own mail server. 
Whats new this week My 2014 mail server was configured just about identically with my 2008 one, for which it was handy to consult the earlier articles in this series . Then, recently, my weekly build broke on the software Ive been using to send mail. It was a trivial breakage, easy to fix, but it reminded me about a non-trivial future risk that I didnt want hanging over my head anymore. (For more details, see my previous post .) Now Im sending mail another way. Clients are unchanged, the server no longer needs TMDA or its dependencies, and I no longer have a specific expectation for how this aspect of my mail service will certainly break in the future. (Just some vague guesses, like a newly discovered compromise in the TLS protocol or OpenSSLs implementation thereof, or STARTTLS or Stunnel s implementation thereof.) A couple iterations First, I tried the smallest change that might work: Replacing tmda-ofmipd with the original ofmipd from mess822 (by the author of qmail. the software around which my mail service is built), Wrapped in SMTP AUTH by spamdyke (new use of an existing tool), Wrapped in STARTTLS by stunnel (as before). It worked TMDA no longer needed. I committed an update to my qmail-run package with a new shell script to manage this ofmipd service. uninstalled TMDA, and removed its configuration files. Next, I tried a change that might shorten the chain of executables : It worked Second instance of spamdyke no longer needed. To start a mail submission service on localhost port 26, these are the lines I added to etcrc. conf : To make the service available on the network, this is the config from etcstunnelstunnel. conf : (It already had this stanza, but with 8025 where tmda-ofmipd was listening. I simply changed the port number and restarted stunnel .) Im still relying on spamdyke for other purposes, but Im comfortable with those. Im still relying on stunnel for STARTTLS, but Im relatively comfortable keeping OpenSSL contained in its own address space and user account. Refactoring for mail hosting The present configuration is a refactoring. no externally visible change to email clients, yes internally visible change to email administrator (moi). I believe this refactoring was one of the best kind, able to be performed safely and reducing the risk I was worried about. The current configuration is much more likely to meet my future need to not have a production outage that interrupts my work for arbitrary duration while I scramble to understand and fix it. I dont have any more cheap ideas for lowering my risk, and it feels low enough anyway. So Im comfortable that this is the right place to stop . Conclusion Want to learn to see the consequences of your choices andor help other people do the same Consider productionizing something important to you. January 14, 2017 Im trying to compile a program with clang and libc on NetBSD. Clang version is 3.9.0, and NetBSD version is 7.0.2. The compile is failing with: ltcstddefgt is present, but it appears to be GCCs: If I am parsing Index of pubNetBSDNetBSD-release-7srcexternalbsdlibc correctly, the library is available. When I attempt to install libc or libcxx : Is Clang with libc a supported configuration on NetBSD How do we use Clang and libc on NetBSD January 11, 2017 Ill install netbsd on an old computer, but I am sure Ill have a hard time to get wireless internet working in a way or another. I figured I could do that easily if I managed to install things for this computer, on another one, the one I am using now, by crosscompiling. 
And that it would be a good training, isnt it For now, if pkgadd and so on are recognized, I still cant pkgadd pkgin or any software: it says it doesnt know that package. How come. I see it, its there. Takk. Heres my PATH variable: PATHusrpkgsbin:usrpkgbin:usrlocalbin:usrbin:bin:usrlocalgames:usrgames ps:some might remember me. Indeed, I failed using this system many time, but I am a romantic, and I cant stop feeling something in my heart anytime I read pkgsrc or netbsd, I just dont know why. so here I am again :D January 09, 2017 NetBSDs scheduler was recently changed to better distribute load of long-running processes on multiple CPUs. So far, the associated sysctl tweaks were not documented, and this was changed now, documenting the kern. sched sysctls. For reference, here is the text that was added to the sysctl(7) manpage. Well, subject says it all. To quote from Soren Jacobsens email. The first release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC1 tag. There have been quite a lot of changes since 7.0. See srcdocCHANGES-7.1 for the full list. Please help us out by testing 7.1RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at email160protected Ive installed NetBSD 7.0.1 in a KVM virtual machine under libvirt on a Fedora 25 Linux host. I want to use spice. so i specified the requisite qxl graphic in the virtual machine then installed xf86-video-qxl-0.1.4nb1 with pkgin in the NetBSD guest. But both varlogxdm. log and varlogXorg.0.log complained that they couldnt find the qxl module. Then I realized they were looking in usrX11R7libmodules but the qxl package put it in usrpkglibxorgmodules. To solve that, I manually added a symbolic link. And indeed, that solved the not found problem. (But why the two directories. ) Now they complain that its the wrong driver. Both xdm. log and Xorg.0.log gripe: (EE) module ABI major version (20) doesnt match the servers version (10) (EE) Failed to load module qxl (module requirement mismatch, 0) Why are things out of sync in the NetBSD code base How can anyone get X to work What can I do to solve this January 08, 2017 im trying to install nzbget. i think it was in the pkgsrc way back but its not there anymore. so i tried this: (1) i downloaded the source from nzbget website (2) then. configure said A compiler with support for C14 language features is required.. so i installed gcc6 using pkgin in gcc6 (3) so then i tried PATHusrpkggcc6bin:PATH. configure and it said compiler is ok, but now i got configure: error: ncurses library not found (4) i have ncurses lib in usrpkgincludencurses, how to let. configure know the location of ncurses lib Is it normal that when I use Zlib from Pkgsrc or base as reference via include bl3 for a project (like the current supertuxkart version 0.9.2) that within. buildlinkinclude directory no symlinks exist of zlib. h and zconf. h I newer saw this behaviour before and it breaks the compilation. January 05, 2017 Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didnt care to meet it. 
This tiny mid-2013 11 MacBook Air made it relatively ergonomic to work from planes, buses, and anywhere else when I lived in New York and flew regularly to see someone important in Indiana, and continued to serve me well when that changed and changed again . The next thing I was planning to do with it was write this post. Instead I rebooted into DiskWarrior and crossed my fingers. Things get in your way, or threaten to. Thats life. But when you have slack time. you can Cope better when stuff happens, Invest in reducing obstacles, and Feel more prepared for the next time stuff happens. Having enough slack is as virtuous a cycle as insufficient slack is a vicious one. Paying down non-tech debts Last year I decided to spend more time and energy improving my health. Having recently spent a few weeks deliberately not paying attention to any of that, Im quite sure that I prefer paying attention to it, and am once again doing so. Learning to make my health a priority required that I make other things non-priorities, notably Agile in 3 Minutes. It no longer requires that. Ive recently invested in making the site easier for me to publish, and you may notice that its easier for you to browse. I didnt have enough slack to do these things when I was writing and recording a new episode every week. Now that enough of them have been taken care of, I feel prepared to take new steps with the podcast. And tech debts Earlier this week I noticed a broken link in a comment on Refactorings for web hosting. so I took a moment to check for other broken links on this site (ikiwiki makes it easy ). Before that, I inspected and minimized the differences between dev (my laptop) and prod (my server, where youre reading this), updated prod with the latest ikiwiki settings, and (because its all in Git) rebased dev from prod. In so doing, I observed that more config differences could be easily harmonized by adjusting some server paths to match those on my laptop. (When Apple introduced System Integrity Protection. pkgsrc on Mac OS X could no longer install under usr. and moved to opt. With my automated NetBSD package build. I can easily build the next batch for optpkg as well, retaining usrpkg as a symlink for a while. So I have.) Ive been running lots of these builds in the past week anyway, because a family of packages I maintain in pkgsrc had been outdated for quite a while and I finally got around to catching them up to upstream. Once they built on OS X, I committed the updates to the cross-platform package system. only to notice that at least one of them didnt build on NetBSD. So I fixed it, ran another build, saw what else I broke, and repeated until green. And taking on patience debt telling you about more of this crud Due to another update that temporarily broke the build of TMDA. I was freshly reminded that thats a relatively biggish liability in my server setup. I use TMDA to send mail. which is not mainly what its for, and I never got around to using it for what its for (protecting against spam with automated challenge-response), and it hasnt been maintained for years, and is stuck needing an old version of Python. On the plus side, running a weekly build means that when TMDA breaks more permanently, Ill notice pretty quickly. On the minus side, when that happens, Ill feel pressure to fix or replace it so I can (1) continue to send email like a normal person and (2) restart the weekly build like a me-person. If I can reduce the liability now, maybe I can avoid feeling that pressure later. 
Investigating alternatives, I remembered that Spamdyke. which I already use for delaying the SMTP greeting. blacklisting from a DNSBL as well as To: addresses that only get spam anymore, and greylisting from unknown senders, can provide SMTP AUTH. So Ill try keeping stunnel and replacing tmda-ofmipd with a second instance of spamdyke. If thats good, Ill remove mailtmda from the list of packages I build every week. then build spamdyke with OpenSSL support and try letting it handle the TLS encryption directly. If thats good, Ill remove securitystunnel from the list of packages too, leaving me at the mercy of fewer pieces of software breaking. Leaning more heavily on Spamdyke isnt a clear net reduction of risk. When a bad bug is found, itll impact several aspects of my mail service. And if and when NetBSD moves from GCC to Clang, Ill have to add langgcc to my list of packages and instruct pkgsrc to use it when building Spamdyke, or else come up with a patch to remove Spamdykes use of anonymous inner functions in C. (That could be fun. I recently started learning C .) I could go on, but Im a nice person who cares about you. Thats enough of that. So what All these builds pushing my soon-to-be-replaced laptop through its final paces as a development machine might have had something to do with triggering its misbehavior last night. And all this work seems like, well, a lot of work. Is there some way I could do less of it Yes, of course. But given my interests and goals, it might not be a clear net improvement. For instance, when Tim Ottinger drew my attention to that Test::Continuous Perl module, being a pkgsrc developer gave me an easy way to uninstall it if I wound up not liking it, which meant it was easy to try, which meant I tried it. I want conditions in my life to favor trying things. So Im invested in preserving and extending those conditions. In Gary Bernhardt s formulation, Im aiming to maximize the area under the curve . No new resolutions, yes new resolvings Im not looking to add new goals for myself for 2017. Im not even trying to make existing things good enough there are too many things, and as a recovering perfectionist I have trouble setting a reasonable bar Im just trying to make them good enough enough that I can expect small slices of time and attention to permit small improvements . Jessica Kerr has a thoughtful side blog named True in software, true in life. Heres something thatd qualify: When conditions are expected to change, smaller batch size helps us adjust. Reducing batch size takes time and effort. Paying down my self-debts (technical and otherwise) feels like resolving . I have, at times, felt quite out of position at managing myself. Lately Im feeling much more in position, and much more like I can expect to continue to make small improvements to my positioning. When you want the option to change your bodys direction, you take smaller steps, lower your center, concentrate on balance. Thats Agile. Moi My current best understanding is that a balanced life is a small-batch-size life. If thats the case, Im getting there. Further repositioning This coming Monday, Ill be switching to one of these weird new MacBook Pros with the row of non-clicky touchscreen keys. If my current computer survives till then, thatll be one smooth step in a series of transitions. (In other news, Bekki defends her dissertation that day.) 
The following Monday, I'll be starting my next project, a mostly-remote gig pairing in Python to deliver software for a client while encouraging and supporting growth in my Pillar teammates. I'll be in Des Moines every so often; if you're there and/or have recommendations for me, I'd love to hear from you. The Monday after that, we'll pack up a few things the movers haven't already taken away, and our time in Indiana will come to an end. We're headed back to the New York area to live near family and friends.

No resolutions, yes intentions

For 2017, I declare my intentions to:

- Continue to improve my health and otherwise attend to my own needs
- Help more people understand what software development work is like
- Help more people feel heard

I hope to see and hear you along the way.

January 04, 2017

So over the holidays, I managed to get in some good quality family time and find some time to work on some Open Source stuff. I meant to work mainly on dhcpcd, but it turned out I spent most of my time working on the NetBSD curses library so that Python curses now works with it. Now, most people r...

Adding and removing hardware components in operation is common in today's commoditized computing environments. This was not always the case; in the past century, one had to power down a machine in order to change network cards, hard disks or RAM. A major step towards changing a system's configuration at runtime came with USB, but that's not where it ends: other systems like PCI support hotplugging as well. Another area where a system's configuration can change is the amount of Random Access Memory (RAM). Usually this is fixed, determined at system start time, and then managed by the operating system's memory management subsystem. But especially with today's virtualized hardware, even the amount of RAM assigned to a system can easily be changed. For example, a VM can be assigned more RAM when needed, without even rebooting the system, leading to increased system performance without introducing swapping/paging overhead. Of course this requires support from the operating system and its memory management subsystem. For NetBSD, the UVM virtual memory system was now changed to support this via the uvm_hotplug(9) API, and a first user for this is the Xen balloon(4) driver. Quoting from the balloon(4) manpage: The balloon driver supports the memory ballooning operations offered in Xen environments. It allows shrinking or extending a domain's available memory by passing pages between different domains. The uvm_hotplug(9) manpage gives us more information on the UVM hotplug functionality: When the kernel is compiled with options UVM_HOTPLUG, memory segments are handled in a dynamic data structure (rbtree(3)) compared to a static array when not. This enables kernel code to add or remove information about memory segments at any point after boot, thus hotplug. To answer more questions for portmasters who want to change their ports, Cherry G. Mathew has now posted a uvm_hotplug(9) portmasters FAQ. It covers questions on the background, affected files, and needed changes. For more information on UVM, see Charles "Chuck" Cranor's PhD dissertation on the Design and Implementation of UVM (PDF) as well as his Usenix talk on the UVM Virtual Memory System (PS). There is also plenty of information available on Xen ballooning; check it out and share your experiences on NetBSD's port-xen mailing list.
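To make the manpage's point concrete, here is a toy model in Python (an illustration only, with made-up addresses and names, nothing to do with the actual kernel sources): keeping segments in a sorted, dynamic structure is what lets them be added and removed at any time after boot, which is exactly what ballooning needs, whereas a static array has to be sized once up front.

    # Toy model of hotpluggable physical memory segments (illustration only).
    # Segments are kept sorted by start address so they can be added,
    # removed, and looked up at any time after "boot".
    import bisect

    class SegmentMap:
        def __init__(self):
            self._starts = []        # sorted start addresses
            self._segs = {}          # start -> (end, name)

        def add(self, start, end, name):
            bisect.insort(self._starts, start)
            self._segs[start] = (end, name)

        def remove(self, start):
            self._starts.remove(start)
            del self._segs[start]

        def lookup(self, paddr):
            i = bisect.bisect_right(self._starts, paddr) - 1
            if i < 0:
                return None
            start = self._starts[i]
            end, name = self._segs[start]
            return name if start <= paddr < end else None

    mem = SegmentMap()
    mem.add(0x00000000, 0x40000000, "boot RAM")           # present at boot
    mem.add(0x100000000, 0x140000000, "ballooned-in")     # added later at runtime
    print(mem.lookup(0x120000000))                        # -> "ballooned-in"
    mem.remove(0x100000000)                               # balloon deflates again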
December 29, 2016

My brother got me some very tasty presents for Christmas (and my upcoming birthday), namely the GIGABYTE BRIX J1900 and a Samsung EVO 750 250GB. Santa also brought me 8GB of Crucial memory. Putting them all together makes a nice new machine to install NetBSD/Xen. The key part is this is a low...

December 22, 2016

After my last blog postings on the NetBSD scheduler, some time went by. What has happened is that the code to handle process migration was rewritten to give more knobs for tuning, and some testing was done. The initial problem stated in PR kern/51615 is solved by the code. To reach a wider audience and get more testing, the code was committed to NetBSD-current today. Now, two things remain to be seen:

More testing. This best involves situations that compare the system's behaviour without and with the patch. Situations to test include pure computation jobs that involve multiple parallel processes, a mix of CPU-crunching and input/output on a number of concurrent processes, and full build.sh examples. If you have time and an interesting set of numbers, please feel free to let us know on tech-kern.

Documentation. There is already a number of undocumented sysctls under kern.sched, which was now extended by one more, average_weight. While it is obvious from the formula what the knob does, testing it under various real-life conditions and seeing how things change is left to be determined by a PhD thesis or two; be sure to drop us your patches for src/share/man/man7/sysctl.7 if you can come up with a comprehensible description of all the scheduler sysctls. So just when you thought there was no more research to be done in scheduling algorithms, here is your chance for fame and glory. :-)

December 17, 2016

How can I activate a Latin American keyboard layout on NetBSD? When installing, I never saw a Latin American keyboard option, only Spanish.

December 09, 2016

Where can I find and install an AR9271 driver for the latest NetBSD? The target machine does not have Internet access and I need to set up the WiFi dongle first. UPDATE: wpa_supplicant was already written, but I didn't see my device. When I plug in the dongle it's shown as: ifconfig shows only re0 and lo0 interfaces. UPDATE: I saw on some Linux forums that the dongle uses an Atheros chip, but I checked in Windows and see Ralink. The ral driver is also integrated in NetBSD, but the situation doesn't change: I see no ra device in dmesg.boot.

December 08, 2016

So, I've installed NetBSD 7 and the device shows up again as ugen (ugein, lol). Then I installed FreeBSD 10.2, and ugen again. usbconfig gives me: ugen4.3: <product 0x7601 vendor 0x148f> at usbus4, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (90ma). So, what's next? Buying a new dongle is the last thing I'll do. UPD: the NDIS driver does not work.

December 07, 2016

At Agile Testing Days, I facilitated a workshop called DevOps Dojo. We role-played Dev and Ops developing and operating a production system, then figured out how to do it better together. You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it.

Some firsts

I've spoken at several instances of pkgsrcCon (including twice in nearby Berlin), but that's more like a hackathon with some talks. Agile Testing Days was a proper conference, with hundreds of people and plenty of conferring. If someone asks whether I'm an international speaker, or claims I am one, I now won't feel terribly uncomfortable going along with it. What I expected from many previous Lean Coffees: I'd have to control myself to not say all the ideas and suggestions that come to mind.
What happened at this Lean Coffee: It was very easy to listen, because I didn't have many ideas or suggestions, because the topics came from people who were mostly testers. Conclusions I immediately drew: Come to think of it, I have not played every role on a team. I don't know what it's like to be a tester. Maybe my guesses about what it's like are less wrong than some others, but they're still gonna be wrong. This is evidently my first conference that's more testing than Agile. Cool! I bet I can learn a lot here. Thanks to Troy Magennis, Markus Gärtner, and Cat Swetel, I decided to try a new idea and spend a few slides drawing attention to the existence and purpose of Agile Testing Days' Code of Conduct. I can't tell yet how much good this did, but it took so little time that I'll keep trying it in future conference presentations and workshops.

Some nexts

My next gig will be remote coaching, centered around what we notice as we're pair programming and delivering working software. I've done plenty of coaching and plenty of remote work, but not usually at the same time. Thanks to Lean Coffee with folks like Janet and Alex Schladebeck, I got some good advice on being a more effective influencer when it takes more intention and effort to have face-to-face interactions. Alex: For a personal connection, start meetings by unloading your baggage (whatever's on your mind today that might be dividing your attention) and inviting others to unload theirs. (Ideally, establish this practice in person first.) Janet: Ask questions that help people recognize their own situation. (Helping people orient themselves in their problem spaces is one of my go-to strengths. I'm ready to be leaning harder on it.) As I learn about remote coaching, I expect to write things down at Shape My Work, a wiki about distributed Agile that Alex Harms and I created. You'll notice it has a Code of Conduct. If it makes good sense to you, we'd love to learn what you've learned as a remote Agilist. I found Agile Testing Days to be a lovingly organized and carefully tuned mix of coffee breaks, efficiency, flexibility, and whimsy. The love and whimsy shone through. I'm honored to have been part of it, and I sure as heck hope to be back next year. We'd be back next year anyway; we visit family in Germany every December. Someday we might choose to live near them for a while. It occurs to me that having participated in Agile Testing Days might well have been an early investment in that option, and the thought pleases me. (As does the thought of hopping on a train to participate again.) I'm in Europe through Christmas. I consult, coach, and train. Do you know of anyone who could use a day or three of my services? One aspect of being a tester I do identify with is being frequently challenged to explain one's discipline or justify one's decisions to people who don't know what the work is like (and might not recognize the impact of their not knowing). In that regard, I wonder how helpful Agile in 3 Minutes is for testers. Let's say I could be so lucky as to have a few guest episodes about testing. Who would be the first few people you'd want to hear from? Who has a way with words and ideas, knows the work, and can speak to it in their unique voice to help the rest of us understand a bit better?

December 01, 2016

November 24, 2016

Interesting news comes in via Slashdot: Apple Releases macOS 10.12 Sierra Open Source Darwin Code. Apple has released the open source Darwin code for macOS 10.12 Sierra.
The code, located on Apple's open source website, can be accessed via direct link now, although it doesn't yet appear on the site's home page. The release builds on a long-standing library of open source code that dates all the way back to OS X 10.0. There, you'll also find the Open Source Reference Library and developer tools, along with iOS and OS X Server resources. The lowest layers of macOS, including the kernel, BSD portions, and drivers, are based mainly on open source technologies, collectively called Darwin. As such, Apple provides download links to the latest versions of these technologies for the open source community to learn and to use. This may not only be of interest to the OpenDarwin folks (or rather their successors in PureDarwin), but more investigation, not only of the code itself but also of the license it is released under, is necessary to learn whether anything can be gained back for NetBSD. Why "back"? As you may or may not remember, macOS includes some parts of NetBSD (besides lots of FreeBSD, probably some OpenBSD, much other Open Source software, and surely a big lot of Apple's own code).

My first job was in Operations. When I got to be a Developer, I promised myself I'd remember how to be good to Ops. I've sometimes succeeded. And when I've been effective, it's been in part due to my firsthand knowledge of both roles.

DevOps is two things (hint: they're not Dev and Ops)

Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help. These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what's already there.

If it hurts, do it more often

Another way is to update my installed third-party software once a week. This introduces two pleasant tendencies: it's much less likely, at any given time, that I'm running something dangerously outdated, and more likely, when an urgent fix is needed, that I'll have my wits about me to do it right. Updating software every week also makes two strong assumptions about safety (see Modern Agile's Make Safety a Prerequisite): that I can quickly and easily roll back to the previous versions, and build and install new versions. Since I've been leaning hard on these assumptions, I've invested in making them more true. The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the active software set by moving a symbolic link (sketched below). It worked. On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour's downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.

Until it hurts enough less

I liked the payoff on that investment a lot. I've been adding incremental enhancements ever since. I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise.
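The symlink switch itself is simple. Here is a minimal sketch of the idea in Python, assuming hypothetical paths (/usr/pkg as the active link and a date-stamped /usr/pkg-20170108 directory as the freshly installed package set); the real setup differs in its details:

    #!/usr/bin/env python3
    # Sketch: repoint the "active package set" symlink in one atomic step.
    # Paths are hypothetical; assumes the active path is already a symlink.
    import os

    ACTIVE_LINK = "/usr/pkg"            # the symlink services resolve at startup
    NEW_SET     = "/usr/pkg-20170108"   # freshly installed package set

    def switch(link, target):
        tmp = link + ".new"
        # Build the new symlink alongside, then rename it over the old one.
        # rename(2) replaces the link atomically, so it never briefly disappears.
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(target, tmp)
        os.rename(tmp, link)

    if __name__ == "__main__":
        switch(ACTIVE_LINK, NEW_SET)
        print("active set is now", os.readlink(ACTIVE_LINK))

The point of renaming a pre-built symlink, rather than deleting and recreating the link, is that services restarted around the swap always see either the old set or the new one, never a missing path.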
Building directly on the server was straightforward, and I was happy to accept an occasional reduction in responsiveness in exchange for the results. After the Mac mini died, I moved to a hosted Virtual Private Server that was much easier to mimic. So I took the job offline to a local VirtualBox running the same release and architecture of NetBSD (32-bit i386 to begin with, 64-bit amd64 now, both under Xen). The local job ran faster by some hours (I forget how many), during which the server continued devoting all its I/O and CPU bandwidth to its full-time responsibilities. The last time I went and improved something, it was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that's uploaded along with the packages. When the upload's done, there's one manual step: run the install script. If you can read these words, it works.

DevOps is still two things

Applying Dev concepts to the Ops domain is one aspect. When I'm acting alone as both Dev and Ops, as in the above example, I've demonstrated only that one aspect. The other, bigger half is collaboration across disciplines and roles. I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD or from anything else that looks like healthy cross-functional teamwork. It's the healthy cross-functional teamwork I'm after. There are lots of places to start having more of that. If your team's context suggests to you that DevOps would be a fine place to start, go after it! Find ways for Dev and Ops to be learning together and delivering together. That's the whole deal.

Here's another deal

Two weeks from today, at Agile Testing Days in Potsdam, Germany, I'm running a hands-on DevOps collaboration workshop. Can you join us? It's not too late, and you can save 10% off the price of the conference ticket. Just provide my discount code when you register. I'd love to see you there.

November 22, 2016

According to NetBSD's wiki I can use pkg_add -uu to upgrade packages. However, when I attempt to use pkg_add -uu it results in an error. I've tried to parse the pkg_add man page but I can't tell what the command is to update everything. I can't use pkg_chk because it's not installed, and I can't get the package system to install it: What is the secret command to get the OS to update everything? Please forgive my ignorance with this question. I only have NetBSD systems for testing software. They get used a few times a year, and I don't know much about them otherwise.

October 27, 2016

A LAN has been set up with IP/subnet mask 192.48.1.0/255.255.255.224. What is the maximum number of machines that can be set up in this LAN, and why? (This comes under a class C network, so the maximum would be 255 or less; correct me if I'm wrong.) Suresh ([email protected]) sends a mail to my friend Rahul ([email protected]) with these three files as separate attachments, as below:

- march-reports.ppt: PowerPoint file of size 256 KB
- locations.rar: RAR archive file of size 460 KB
- me-snap.tiff: TIFF picture file of size 2970 KB

a) What is the size of the outgoing mail, including mail headers? b) What is the outgoing mail size if all three files are archived as one single .rar file and sent out as one single attachment? c) Show the MIME-based mail structure of the outgoing mail. Show the NetBSD-based C code for sending a text message "Hello. This works" to a remote server running on IP 122.250.110.14 on port 5050 and getting back an acknowledgement.
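(The exercise asks for C; purely as a quick way to sanity-check the arithmetic behind the subnet and attachment-size parts, here is a small Python sketch. The 4/3 factor reflects base64 encoding of attachments; real header and MIME-boundary overhead varies by mail client, so the figures are estimates only.)

    #!/usr/bin/env python3
    # Rough checks for the subnet and mail-size questions above (estimates only).
    import math

    # 255.255.255.224 leaves 5 host bits: 2**5 addresses, minus network and broadcast.
    mask = "255.255.255.224"
    host_bits = 32 - sum(bin(int(octet)).count("1") for octet in mask.split("."))
    print("usable hosts:", 2**host_bits - 2)        # -> 30

    # Email attachments are base64-encoded: every 3 bytes become 4 characters.
    attachments_kb = [256, 460, 2970]
    encoded_kb = sum(math.ceil(kb * 4 / 3) for kb in attachments_kb)
    print("attachments after base64: about %d KB (plus headers and MIME boundaries)" % encoded_kb)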
October 10, 2016

The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 11.0-RELEASE. This is the first release of the stable/11 branch. Some of the highlights:

- OpenSSH DSA key generation has been disabled by default. It is important to update OpenSSH keys prior to upgrading. Additionally, Protocol 1 support has been removed. OpenSSH has been updated to 7.2p2.
- Wireless support for 802.11n has been added.
- By default, the ifconfig(8) utility will set the default regulatory domain to FCC on wireless interfaces. As a result, newly created wireless interfaces with default settings will have less chance to violate country-specific regulations.
- The svnlite(1) utility has been updated to version 1.9.4.
- The libblacklist(3) library and applications have been ported from the NetBSD Project.
- Support for the AArch64 (arm64) architecture has been added.
- Native graphics support has been added to the bhyve(8) hypervisor.
- Broader wireless network driver support has been added.

The release notes provide an in-depth look at the new release, and you can get it from the download page.

September 14, 2016

Many programming guides recommend beginning scripts with the #!/usr/bin/env shebang in order to automatically locate the necessary interpreter. For example, for a Python script you would use #!/usr/bin/env python, and then, the saying goes, the script would just work on any machine with Python installed. The reason for this recommendation is that /usr/bin/env python will search the PATH for a program called python and execute the first one found, and that usually works fine on one's own machine. Unfortunately, this advice is plagued with problems, and assuming it will work is wishful thinking. Let me elaborate. I'll use Python below for illustration purposes, but the following applies equally to any other interpreted language.

i) The first problem is that using /usr/bin/env lets you find an interpreter, but not necessarily the correct interpreter. In our example above, we told the system to look for an interpreter called python, but we did not say anything about compatible versions. Did you want Python 2.x or 3.x? Or maybe exactly 2.7? Or at least 3.2? You can't tell, right? So the computer can't tell either; regardless, the script will probably run with whichever version happens to be called python, which could be any, thanks to the alternatives system. The danger is that, if the version is mismatched, the script will fail, and the failure can manifest itself at a much later stage (e.g. a syntax error in an infrequent code path) under obscure circumstances.

ii) The second problem, assuming you ignore the version problem above because your script is compatible with all possible versions (hah), is that you may pick up an interpreter that does not have all prerequisite dependencies installed. Say your script decides to import a bunch of third-party modules: where are those modules located? Typically, the modules exist in a centralized repository that is specific to the interpreter installation (e.g. a lib/python2.7/site-packages directory that lives alongside the interpreter binary). So maybe your program found a Python 2.7 under /usr/local/bin, but in reality you needed it to find the one in /usr/bin because that's where all your Python modules are. If that happens, you'll receive an obscure error that doesn't properly describe the exact cause of the problem you got.
iii) The third problem, assuming your script is portable to all versions (hah again) and that you don't need any modules (really?), is that you are assuming that the interpreter is available via a specific name. Unfortunately, the name of the interpreter can vary. For example: pkgsrc installs all Python binaries with explicitly versioned names (e.g. python2.7 and python3.0) to avoid ambiguity, and no python symlink is created by default, which means your script won't run at all even when Python is seemingly installed.

iv) The fourth problem is that you cannot pass flags to the interpreter. The shebang line is intended to contain the name of the interpreter plus a single argument to it. Using /usr/bin/env as the interpreter name consumes the first slot, and the name of the interpreter consumes the second, so there is no room to pass additional flags to the program. What happens with the rest of the arguments is platform-dependent: they may all be passed as a single string to env, or they may be tokenized as individual arguments. This is not a huge deal, though: one argument for flags is too restricted anyway, and you can usually set up the interpreter later from within the script.

v) The fifth and worst problem is that your script is at the mercy of the user's environment configuration. If the user has a "misconfigured" PATH, your script will mysteriously fail at run time in ways that you cannot expect and that may be very difficult to troubleshoot later on. I put "misconfigured" in quotes because the problem here is very subtle. For example: I have a shell configuration that I carry across many different machines and various operating systems; that configuration has complex logic to determine a sane PATH regardless of the system I'm in, but this, in turn, means that the PATH can end up containing more than one version of the same program. This is fine for interactive shell use, but it's not OK for any program to assume that my PATH will match its expectations.

vi) The sixth and last problem is that a script prefixed with /usr/bin/env is not suitable for being installed. This is justified by all the other points illustrated above: once a program is installed on the system, it must behave deterministically no matter how it is invoked. More importantly, when you install a program, you do so under a set of assumptions gathered by a configure-like script or prespecified by a package manager. To ensure things work, the installed script must see the exact same environment that was specified at installation time. In particular, the script must point at the correct interpreter version and at the interpreter that has access to all package dependencies.

So what to do?

All this considered, you may still use /usr/bin/env for the convenience of your own throwaway scripts (those that don't leave your machine), and also for documentation purposes and as a placeholder for a better default. For anything else, here are some possible alternatives to using this harmful shebang:

- Patch up the scripts during the build of your software to point to the specific chosen interpreter, based on a setting the user provided at configure time or one that you detected automatically. Yes, this means you need make or similar for a simple script, but these are the realities of the environment they'll run under.
- Rely on the packaging system to do the patching, which is pretty much what pkgsrc does automatically (and, I suppose, pretty much any other packaging system out there).
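As a complementary habit, and purely as an illustration here (the pinned interpreter path, version floor, and dependency name below are hypothetical), a script can also fail fast with a clear message when it is nevertheless launched under the wrong interpreter, instead of dying much later with an obscure error:

    #!/usr/pkg/bin/python3.9
    # The pinned path above is hypothetical; a build or packaging step would substitute it.
    # Fail fast if launched under the wrong interpreter (problems i and ii above).
    import sys
    import importlib.util

    if sys.version_info < (3, 7):                       # hypothetical version floor
        sys.exit("need Python >= 3.7, got %s (%s)"
                 % (sys.version.split()[0], sys.executable))

    for module in ("requests",):                        # hypothetical third-party dependency
        if importlib.util.find_spec(module) is None:
            sys.exit("interpreter %s lacks required module %r" % (sys.executable, module))

    print("interpreter and dependencies look sane")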
Just don't assume that the magic #!/usr/bin/env foo is sufficient or even correct for the final installed program.

Bonus chatter: There is a myth that the original shebang prefix was chosen so that the kernel could look for it as a 32-bit magic cookie at the beginning of an executable file. I actually believed this myth for a long time until today, as a couple of readers pointed me at "The magic", details about the shebang/hash-bang mechanism on various Unix flavours, with interesting background that contradicts this.

August 24, 2016

I'm running NetBSD in a virtual machine. Documentation and explanations on how to use pkgsrc are scarce. Let's say I want to install vim for NetBSD. What would I type? Do I need a URL? Do I need a specific version? Do I need to set up a directory for building the source of vim?

July 08, 2016

Here are some notes on installing and running NetBSD/evbarm on the AllWinner A20-powered CubieBoard2. I bought this board a few weeks ago for its SATA capabilities, despite the fact that there are now cheaper boards with more powerful CPUs. The required steps for creating a bootable micro SD card are detailed on the NetBSD Wiki, and a NetBSD installation is required to run mkubootimage. I used a USB to TTL serial cable to connect to the board and create user accounts. Do not be afraid of serial, as it has in fact only advantages: there is no need to connect a USB keyboard or an HDMI display, and it also brings back nice memories. The steps, in order:

- Connecting using cu (from my OpenBSD machine); the device name might be different when using cu on other operating systems.
- Adding a regular user in the wheel group.
- Adding a password to the newly created user and changing the default shell to ksh.
- Installing and configuring pkgin.
- Finally, here is a dmesg for reference purposes.

June 30, 2016

I've been itching to go wireless on my office desk for some time. The final wires to eradicate are from my Mac into a USB hub connected to two hard discs for backups. Years ago I had an Apple Time Capsule. The Time Capsule is an Airport Wi-Fi base station with a hard disc for Macs to back up to using the Time Machine backup software. It was pretty solid kit for a couple of years. Under the hood it runs NetBSD and, as an aside, I have had a few beers with the guy who ported the operating system. The power supply decided to give up, a very common fault apparently. I will clean the cables up. I promise. When I was on my travels and living in two places, I had hard discs in both locations. The Mac supports multiple discs for backups, and I encrypted the backups in case the discs were stolen. But now that I'm in one home, I want to be able to move around the house with the Mac but still back up without having to go to the office. We are a two-Mac house, so we need something more convenient. I already have a base station, and I don't really want to shell out loads of money for an Apple one. There are several options to set up a Time Capsule equivalent. If you have a spare Mac, get a copy of Mac OS X Server. It will support Time Machine backups for multiple Macs and also supports quotas so that the size of the backups can be controlled. I don't have a spare stationary Mac. Anything that speaks the AppleTalk file sharing protocol reasonably well will do. Enter the Raspberry Pi. I have a Raspberry Pi 3, and within minutes one can install the Netatalk software. This has been available for years on Linux and implements the Apple file sharing protocols really well. With an external drive added, I was able to get a Time Machine backup working using this article.
I could not use my existing backup drive as is. Linux will read and write Mac OS drives, but there is a bit of to-ing and fro-ing, so it is best to start with a fresh native Linux filesystem. Even if you can get it to work with the Mac OS drive, it will not be able to use a Time Machine backup from a drive previously connected directly. I've been using this setup for the last couple of weeks. I have not had to do a serious restore yet, and I should caveat that I still have a hard drive I plug directly into the machine, just in case. The first rule of backups: a file doesn't exist unless there are three copies on different physical media. (The Raspberry Pi is also set up to be a MiniDLNA server. It will stream media to Xboxes and other media players.)

June 12, 2016

I installed sudo on NetBSD 7.0 using pkg. I copied /usr/pkg/etc/sudoers to /etc/sudoers because the docs say /etc/sudoers and possibly /etc/sudoers.local is used. I uncommented the line %wheel ALL=(ALL) ALL. I then added myself to the wheel group. I verified I am in wheel with groups. I then logged off and then back on. When I attempt to run sudo <command>, I get the standard error message. What is wrong with my sudo installation, and how can I fix it?

May 31, 2016

A brief description of playing around with SunOS 4.1.4, which was the last version of SunOS to be based on BSD. File Info: 17 Min, 8 MB. Ogg Link: archive.org/download/bsdtalk265/bsdtalk265.ogg

April 30, 2016

Playing around with the gopher protocol. Description of gopher from the 1995 book Student's Guide to the Internet by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at youtube.com/watch?v=oR76UI7aTvs Check out gopher.floodgap.com/gopher File Info: 27 Min, 13 MB. Ogg Link: archive.org/download/bsdtalk264/bsdtalk264.ogg

March 23, 2016

This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD. An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at garbage.fm File Info: 17 Min, 8 MB. Ogg Link: archive.org/download/bsdtalk263/bsdtalk263.ogg

via these fine people and places: This planet is operated by Kimmo Suominen. Hosting provided by Global Wire Oy.
