Found the answer:
function doOptions() {
  // Simply returning a TextOutput lets Google Apps Script add the necessary
  // Access-Control headers when the deployment is set to "Anyone".
  return ContentService.createTextOutput('');
}
docker volume create -d local -o type=none -o o=bind -o device=/mnt/ssd/pgsql pgsql-data
Or in a Compose file:
volumes:
  pgsql-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/ssd/pgsql
As mentioned in the comment above, my specific issue was due to an errant space in the variable names. This was carried over from my actual script into the example script in the question; eliminating the space fixed the issue.
Looking at the pics, your button is obviously not at the location you are trying to import it from. Please add a second dot to the import.
Yes, this is the intended behaviour.
But you can change it in meta tags or simply set a min-width to the body.
I answered a similar question some time ago in this issue.
Nowadays, providers are async factories, which are just a more limited version of factories, since factories can already be async. We have plans to remove providers in v8, so I would suggest always using factories.
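To illustrate the point that factories can already be async, here is a minimal plain-TypeScript sketch of the provider shape (the token, config names, and `resolve` helper are all made up; in a real NestJS app this object would go into a module's `providers` array):

```typescript
// Minimal stand-in for a NestJS-style provider (names are hypothetical).
type Provider<T> = {
  provide: string;
  useFactory: () => T | Promise<T>;
};

// The factory itself is async, so no separate "async provider" concept is needed.
const configProvider: Provider<{ apiUrl: string }> = {
  provide: 'APP_CONFIG',
  useFactory: async () => {
    // stand-in for an async setup step (reading a file, opening a connection, ...)
    return { apiUrl: 'https://example.test/api' };
  },
};

// A container would await the factory when resolving the token.
async function resolve<T>(p: Provider<T>): Promise<T> {
  return p.useFactory();
}

resolve(configProvider).then((cfg) => console.log(cfg.apiUrl));
```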
I found it. You have to add attributes like these:
android:layout_height="wrap_content"
android:elegantTextHeight="true"
android:minHeight="96dp"
android:singleLine="false"
Now, every time typing reaches the bottom line, the multiline text field grows in height and adds a new line.
At the top menu bar, go to View and toggle "Show only the active ontology".
Just promoting the answer from the comment section of the question to an actual answer, as community wiki.
Did you find an answer? I am also facing this error now, seven years later.
Basically anything I want from Android I pull into Termux through a copy. For apps, I use a file manager like CX File Explorer: Local -> Apps (this lands you in a "Downloaded" apps tab) -> the "All" sub-tab (all apps installed on the system) -> scroll (forever, thanks to Android bloatware) to the app of choice -> press and hold -> Copy -> save in Downloads. Then, still in CX Explorer, navigate to Android Downloads, press and hold, Share -> Termux, and in the popup choose "Open In Directory" -> ~/downloads. Now it's in Termux's storage/downloads folder (a.k.a. /data/data/com.termux/files/home/storage/downloads), and you're free to use a plethora of tools to do anything you want with it. For example, run pkg install apktool and you can fully decompile APKs semi-automatically with just a few commands; Apktool literally does the organizational sorting into folders for you (decompiled apps produce a lot of small files and folders, so this is a very nice feature). Then I just refactor the code however I want and drop it into whatever new program I need it in, completely independent and free of corporate shackles... as much as anything can be these days. There are a few ways to build things back up, too: an Alpine Linux or Void Linux rootfs, or any rootfs you hobble together with CMake and a few other package grabs here and there. Compiling an APK is a little more complex, and an app that'll pass Play Store standards even more so, since you have to download heavy-duty stuff like the Android SDK/NDK, Kotlin, Gradle, and all that.
Not exactly the answer I know you're looking for, but that's how I access those types of things. Cool command, though! It immediately opened up the Wi-Fi tab in my settings (running Android 10, so above the official SDK 28 supported by the Termux builds on F-Droid and GitHub). An easy way to find out more specifics is to pull the bot config out and run it through all the possible Android command loops (there's plenty of raw reference material on cs.android.com), or pull in an AI terminal tool like Gemini CLI or ShellGPT (Ollama, Hugging Face, Kaggle, and others also have models designed for the CLI) and let it run basic configs until you run out of free tokens, get fed up with its stupidity, or find the answer you're looking for. Cheers, happy coding.
The RC release information for PHP can be found at https://www.php.net/release-candidates.php. To get the data as a JSON string, use the URL https://www.php.net/release-candidates.php?format=json
Code Block:
import { Component } from '@angular/core';

@Component({
  selector: 'app-async-demo',
  template: `
    <h1>Counter: {{ counter }}</h1>
    <button (click)="incrementAsync()">Increment Async</button>
  `
})
export class AsyncDemoComponent {
  counter = 0;

  incrementAsync() {
    // Change inside setTimeout
    setTimeout(() => {
      this.counter++; // View may NOT update
      console.log('Counter updated:', this.counter);
    }, 1000);
  }
}
Excellent post. I share my copy:
Governance Units per Common Operation
-------------------------------------
| Operation                                 | Units consumed                       |
|-------------------------------------------|--------------------------------------|
| Load a record (record.load)               | 5 units                              |
| Save a record (record.save)               | 20 units                             |
| Delete a record (record.delete)           | 20 units                             |
| Create a record (record.create)           | 20 units                             |
| Search records (search.run)               | 10 units per 1,000 records           |
| Send an email                             | 10 units                             |
| Quick lookup (search.lookupFields)        | 1 unit                               |
| Create a saved search                     | 5 units                              |
| Run a paged search                        | 10 units per page                    |
| Generate a dynamic record                 | 20 units                             |
| Invoke a RESTlet/REST API                 | 10 units                             |
| Write a file (file.create)                | 10 units                             |
| Write a log entry (log.debug, log.audit)  | 1 unit                               |
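As a rough illustration of how these costs add up, here is a plain-JavaScript sketch of budget-aware batching (the numbers come from the table above; in real SuiteScript 2.x the live budget comes from runtime.getCurrentScript().getRemainingUsage(), and the helper below is hypothetical):

```javascript
// Unit costs for a few operations, per the table above.
const COSTS = { load: 5, save: 20, search: 10 };

// Return how many of the planned operations fit in the remaining budget.
function plannable(operations, budget) {
  let used = 0;
  let count = 0;
  for (const op of operations) {
    if (used + COSTS[op] > budget) break; // the next op would exhaust governance
    used += COSTS[op];
    count++;
  }
  return count;
}

console.log(plannable(['load', 'save', 'save'], 50)); // 3 (5 + 20 + 20 = 45)
console.log(plannable(['save', 'save', 'save'], 50)); // 2 (a third save would need 60)
```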
I have a situation where I need a field to be used in a calculated one, but if the API client does not request that field, it is never selected from the database, so the calculated field is never computed. I cannot use any HotChocolate attributes on the entity, since the project where those entity classes are defined has no reference to GraphQL, and we want to keep it that way.
In other words, I need to force HotChocolate to always include a property even if the user has not requested it, so that I can use it from an ExtendObjectType.
I am using HotChocolate 13.x.
The following formula will do what you want:
=IF(G$1<>$F2,IF(INDEX($B$2:$B$6,MATCH($F2,$A$2:$A$6,0))<INDEX($B$2:$B$6,MATCH(G$1,$A$2:$A$6,0)),1,0),"-")
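If it helps to see the logic outside of Excel, here is a plain-Python restatement with made-up data, where the lookup ranges $A$2:$A$6 and $B$2:$B$6 become two lists, $F2 is the row key, and G$1 is the column key:

```python
# Made-up sample data standing in for the worksheet ranges.
a = ["x", "y", "z"]   # $A$2:$A$6 - lookup keys
b = [3, 7, 5]         # $B$2:$B$6 - values matched by position

def cell(row_key: str, col_key: str):
    """Mirror of the IF/INDEX/MATCH formula for one result cell."""
    if col_key == row_key:            # G$1 <> $F2 is false
        return "-"
    # INDEX($B..., MATCH(key, $A..., 0)) is just a positional lookup.
    return 1 if b[a.index(row_key)] < b[a.index(col_key)] else 0

print(cell("x", "x"))  # -
print(cell("x", "y"))  # 1, because 3 < 7
print(cell("y", "z"))  # 0, because 7 < 5 is false
```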
This usually happens after a VS Code update if the Python interpreter or extensions get reset. Try re-selecting your Python interpreter (Ctrl+Shift+P → Python: Select Interpreter) and make sure the Python extension is up to date. A quick reinstall of the extension often fixes the import errors too.
Did you find any solution to the issue? I'm also stuck on the same error.
Everything is fine...
You don't need to enter the <receiver> section into AndroidManifest. All is done by the attributes on the receiver class.
Changing the order of the imports fixes it. Old:
<script src="MyParentElement.js" type="module"></script>
<script src="MyChildElement1.js" type="module"></script>
<script src="MyChildElement2.js" type="module"></script>
New:
<script src="MyChildElement1.js" type="module"></script>
<script src="MyChildElement2.js" type="module"></script>
<script src="MyParentElement.js" type="module"></script>
Try clearing the cached notification channels.
You can try this tool (it's limited to class diagrams, though): https://github.com/jupe/puml2code
You should be good to go with disabling data caching completely. Also, query result and sub-query result caching aren't in Trino (only in Starburst products), so no need to turn those off.
You could go one step further and completely disable metadata caching too, if needed, as mentioned in https://www.starburst.io/community/forum/t/avoid-stale-data-in-starburst-delta-queries-cache-tuning-tips/1301.
-Qunused-arguments is a clang command-line option, not gcc's (or g++'s).
Seems like you configured with clang but built with g++?
The industry's standard term for this phenomenon is "glitch", introduced into the literature in Cooper & Krishnamurthi's 2006 paper on their reactive system, FrTime. They define a glitch as
where a signal is recomputed before all of its subordinate signals are up-to-date
The major review paper Bainomugisha et al, 2012 A Survey on Reactive Programming catalogues 15 reactive systems of which 4 are found to be glitchy. All of the sound libraries feature some form of "pull" based workflow.
The "push vs pull" distinction is mostly valid in that pure push-based reactive systems are likely to glitch. In practice the modern commodity signals algorithms such as preact-signals use a mixed "push + pull" strategy to optimise traversal of the invalidated graph whilst preserving glitch freedom - these strategies are explained by Ryan Carniato and Reactively.
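To make the glitch concrete, here is a deliberately naive, push-only sketch (made up for illustration; no real signals library works this way on purpose). The derived value c = a + b is eagerly recomputed after a changes but before b, which also depends on a, has been refreshed:

```javascript
// Naive push semantics: every write eagerly re-runs dependents, even if
// other inputs of the dependent are still stale.
function createGlitchDemo() {
  const observed = [];
  let a = 1;
  let b = a * 10;                                // b is derived from a
  const recomputeC = () => observed.push(a + b); // c depends on a AND b

  const setA = (value) => {
    a = value;
    recomputeC();   // glitch: runs while b still holds the old value
    b = a * 10;
    recomputeC();   // consistent: both inputs are now up to date
  };

  setA(2);
  return observed;  // [12, 22] - the 12 is the glitched intermediate value
}

console.log(createGlitchDemo());
```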
RxJS is now a relatively rare example of an irredeemably glitchy library, as explained in this answer - how to avoid glitches in Rx - it's puzzling what use cases it might be aimed at.
There's a standard posting on this subject here: Terminology: What is a "glitch" in Functional Reactive Programming / RX?
I created a simple open-source app with a local SMTP server for testing during development:
https://github.com/alinoaimi/ghostmaildev
Please check this blog post for the resolution to your requirements:
https://blog.nviso.eu/2022/05/18/detecting-preventing-rogue-azure-subscriptions/
Thank you
Emily had always dreamed of becoming a writer. Growing up in a small town, she didn’t have many opportunities to pursue her passion. Her family was supportive, but they didn’t have a lot of money to send her to a prestigious school. However, Emily didn’t let these challenges stop her. After finishing high school, she moved to the city to attend a local university. There, she studied literature and worked part-time jobs to support herself.
Her first few years in the city were tough. She had to deal with the stress of studying and working, and sometimes she felt like giving up. But Emily was determined. She spent her free time writing stories, submitting them to magazines, and learning from feedback. Slowly, her hard work began to pay off. She got her first short story published in a local magazine. This success encouraged her to keep writing.
After graduation, Emily continued to write, and eventually, her first novel was published. Today, she is a well-known author with several bestselling books. She believes that her success came from never giving up on her dream, no matter how difficult life was. Emily now inspires young writers by sharing her story and encouraging them to keep following their passions, even when things seem impossible.
You can determine all the corners as a first step. You could, for example, do this by traversing the outline after you first found it, or it might just fall out as a byproduct of how you represent your shape in the first place. In your example there would be a corner at x = 2 to 3 and y = 2 to 3 (bottom-left corner of the red rectangle). Realistically, you can now probably go on and grow out rectangles from all found corner points, eliminate duplicates, and be done with it. This is likely your fastest solution in most circumstances, because the algorithm is quite simple, as long as your total number of points is reasonable.
It might be possible to construct rectangles from your corners and combine them for more optimization. That should only be necessary if your number of points is very large or the shape is very complex. Something like constructing all the smallest rectangles formable from your corners (e.g. the small red area). Then you can combine them recursively with each other and incrementally arrive at a set of rectangles ever-increasing in size.
The main takeaway should be: every maximal rectangle must contain at least one corner on its boundary. So you can reduce your set of possible starting points and make it depend only on the complexity of the shape.
It seems Microsoft did not carry the Dundas Chart library over to .NET Core. There are a bunch of third-party libraries out there, like FlexChart.
What's great about FlexChart is that you can just download a sample from GitHub and start working. It will automatically get any NuGet packages you need.
I know this is coming a bit late, but for anyone else visiting this question: I honestly don't think using Supabase Storage for storing images is the best idea.
Here's why: services like Supabase Storage and Amazon S3 are great for storing files like documents, PDFs, videos, audio, and backups, or for general file storage, but they're not really built for handling lots of images, caching them, delivering them via a fast CDN, or performing real-time optimizations if you ever need to.
On the other hand, services like Cloudinary are made specifically for images. Cloudinary handles resizing, optimization, fast global delivery, and caching automatically. This can save you a lot of headaches as your app grows.
So for small experiments, Supabase Storage works fine. But if you want something scalable and hassle-free for images, Cloudinary is usually the better bet.
Just uninstall and reinstall VSCode. Period.
Make sure you install the same (latest?) version you were using.
Google Chrome Portable is used to control which version your ChromeDriver runs against. Your system has the updated version, but this portable build ships one particular version only.
I also have this problem now - and no solution :-/
For me, the problem was that I didn't specify mappedBy in the annotation:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "tradePartner", cascade = CascadeType.ALL)
Some additions to Alex Morozov's answer:
forms.DateField(widget=AdminDateWidget(attrs={"type": "date"}))
In that case I got a popup to select the date, which is more convenient.
The Gemini 1.0 Pro models have been retired; that's why you are getting a 404. You need to migrate to Gemini 2.0/2.5 or later. See this discussion.
I found it much easier to get the item link from the list itself, by taking it from the first item:
Left(First(MyList).'Link to item',Find("/_layouts",First(MyList).'Link to item')-1)
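The same Left/Find slicing restated in plain Python, with a made-up item link, in case the formula's intent is unclear: everything before "/_layouts" is kept.

```python
# Hypothetical 'Link to item' value from the first list item.
link = "https://contoso.sharepoint.com/sites/Team/_layouts/15/listform.aspx?ID=1"

# Left(link, Find("/_layouts", link) - 1) == slice up to the marker.
site_url = link[: link.find("/_layouts")]
print(site_url)  # https://contoso.sharepoint.com/sites/Team
```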
If your browser is set to dark mode but you want sites to always stay light, try this extension: https://chromewebstore.google.com/detail/force-light-mode/mafafghcgiinbpldecfhdchfjemphkid
It overrides the prefers-color-scheme setting so pages load in light mode even when Chrome/system is dark.
So, I finally got the AppIcon back but I had to do a lot to make it happen. In summary, I had to basically delete my subscriptions in AppStore Connect, comment out any subscription stuff in my code, verify the app icon had the right specifications, verify that I was using the correct asset catalog, archive to AppStore Connect, recreate my subscriptions, uncomment out the subscription stuff in my code and change to the new subscription group id, and re-archive to AppStore Connect.
This seems to have been some sort of caching issue with AppStore Connect and it was the only way I could trick it to get the icon back on the .manageSubscriptionsSheet. Below is a more detailed description of the steps that I took to fix this. In conclusion, my recommendation would be to verify that your app icon is correctly formatted with the correct specifications/sizes and uploaded to AppStore Connect prior to adding any subscriptions if possible. In addition, once subscriptions are added, I would NOT change the app icon unless it's absolutely necessary.
Note: For my app, I have two subscriptions within one subscription group.
Steps taken:
Make sure the picture being used is a .png with Color Space=RGB, Color Profile= sRGB IEC61966-2.1, Alpha=No. Use all sizes in the asset catalog, not just single size
Once the icons are in the asset catalog (Asset.xcassets), right click on the 1024x1024 to "Show in Finder", and then "Get Info" to make sure it has the above specifications
Verify that the name of the AppIcon that you're using in the asset catalog is the same one on the main app target page: Target -> General -> App Icons and Launch Screens. Also verify that Target -> Build Settings -> Asset Catalog Compiler - Options -> Primary App Icon Set Name has the same AppIcon name.
Verify that there are no "ghost" asset catalogs by using terminal, navigating to your project directory, and entering the command:
find . -name "*.xcassets" -print
In most cases, there should only be one asset catalog listed (besides the one used for Xcode previews).
Delete or comment out all storeKit code from Xcode (I'm not sure this step is necessary but after many attempts, this step finally seemed to work) AND delete the subscriptions (including the group) from AppStore Connect. Note: I had two subscriptions within one subscription group.
Delete any .storekit files that sync with AppStore.
Test and run in simulator to make sure that the code still works without the subscription stuff.
For your project, under Targets -> General, use a new version and build of your app (a new version may have somehow forced an update of the app icon). It can be a small jump, like going from version 1.0 to 1.0.1.
Clean the build folder, build, save project, exit Xcode, delete derived data, open project again, clear build folder again, build, and archive to AppStore connect. Be sure that the scheme you are using does NOT have a reference to any .storekit file. Edit scheme -> Options -> StoreKit Configuration file -> "none". It should be "none" for Testflight and Release builds
Once the build uploads to AppStore Connect, go to the main Distribution tab, and under "Build", select the build that you just uploaded and hit the Save button.
Now put back/uncomment all the StoreKit code in your project
Redo your subscriptions in AppStore Connect. Note: you will now have a new subscription group id.
Each subscription within the group should have a status of "Ready to Submit" and localizations should say "Prepare for Submission". If it says metadata missing, you are missing one of the following: localizations (either for the individual subscriptions or for the group), availability or pricing info, or the “Review Information” screenshot/notes. It must not have metadata missing or this will not work.
Edit your code and swap the new subscription group id for the old one.
Generate a new .storekit file that syncs to AppStore.
Edit scheme -> Options and select that Storekit Configuration file (this is so you can test with simulator to verify that everything works before archiving a new build)
Run and test in simulator to make sure that all your storekit stuff is working again.
Now edit scheme again and set Storekit Configuration file to "none"
Now archive and upload again to AppStore Connect
Once the build is there under TestFlight, go back to the main distribution tab (under "Build"), select the build that was just uploaded and hit the Save button. After hitting save, under "In-App Purchases and Subscriptions", select your subscriptions
Delete the old app version from whatever physical device you are using to test and re-download the new version. When you run the app, your subscription will show up as new because of the new group id. When you buy or manage the subscription, you should see the app icon.
The Include Controller is designed to work with external JMX files.
If you want to re-use a part of a test plan (Test Fragment) in multiple locations of your .jmx script - go for Module Controller.
More information: Using JMeter Module Controller
I have this problem both when directly accessing SOGo on the internet and when going through a proxy. I keep getting web resources as 404 or 403. There is no documentation to guide through this problem.
We now have the command
Jupyter: Export Current Python File as Jupyter Notebook
accessible from the command palette or the editor context menu, which works like a charm for files with code cell (#%%) annotations.
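For anyone unfamiliar with the annotation format, a plain .py file like this sketch is all the exporter needs; each #%% marker starts a new notebook cell (file name and contents are made up):

```python
# demo.py - the exporter splits this file into notebook cells at each "#%%".
#%%
greeting = "first cell"
print(greeting)

#%% [markdown]
# # This comment block becomes a markdown cell in the exported notebook.

#%%
total = sum(range(5))  # 0 + 1 + 2 + 3 + 4
print("second cell, total =", total)
```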
For anybody who comes to this old post via a search: the Internet is full of lists of things to try if an app fails to install. What is missing is the standard troubleshooting approach: what is the reason for the failure?
This is just a one-off anecdote, I don't know how general it is. I was having repeated failure for an update for one particular app from the Play Store. I downloaded the .apk and used the Total Commander file manager to try to install it. It failed, but with a very specific error message. (In my case the update required a permission which clashed with another app; the only thing I could do was notify the app's owner and hope that the clash could be resolved, anything else would be a waste of time.)
While not directly answering the question "what are the common reasons", this, if it works more generally, is a way to find the actual reason for a failure without having to make troublesome random tests doomed to failure.
Changed it to gateway = Gateway("http://host.docker.internal:8000") and it worked. But with a different error this time.
For those who are struggling with threads when testing with ActiveSupport::CurrentAttributes. It may be better to mock the Current call so you don't have to mess around with threads at all.
Current.stubs(:user).returns(user)
docker build -t sqlfiddle/appservercore:latest appServerCore
docker build -t sqlfiddle/varnish:latest varnish
docker build -t sqlfiddle/appdatabase:latest appDatabase
docker build -t sqlfiddle/hostmonitor:latest hostMonitor
docker build -t sqlfiddle/mysql56host:latest mysql56Host
docker build -t sqlfiddle/postgresql96host:latest postgresql96Host
docker build -t sqlfiddle/postgresql93host:latest postgresql93Host
docker build -t sqlfiddle/mssql2017host:latest mssql2017Host
For me, I was using the wrong language code for Chinese: it's "zh", not "cn".
As of September 24, 2025 (a day before your testing), Gemini 1.5 (pro and flash) were retired and became a legacy model. That's why you are getting the 404 error when using the retired models. You should migrate to Gemini 2.0/2.5 Flash and later instead. Please follow the latest Gemini migration guide to migrate.
Late to the party, but as I ran into the same issue, I found this in the Swiper docs:
https://swiperjs.com/swiper-api#mousewheel-control
The answer by @mikrec works perfectly, but inside the tag you can also do the following:
<Swiper
  modules={[Mousewheel]}
  mousewheel={{ forceToAxis: true }}
>
</Swiper>
I ran into the same error, and the solution was that the debug listener was on. After stopping the listener, all PHP version checks worked directly.
You should extend django.contrib.auth.admin.UserAdmin instead of ModelAdmin. That class already defines the proper filter_horizontal, fieldsets, and add_fieldsets.
reference : https://docs.djangoproject.com/en/5.2/topics/auth/customizing/#a-full-example
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from .models import User
@admin.register(User)
class UserAdmin(BaseUserAdmin):
pass
Have you considered trying it with only one full outer join? Because you're potentially reading each row of each table twice.
A small anonymized data sample of your issue could help in verifying this.
Mark your setup method or helper as not transactional, so it persists data once and survives across test methods:
Here the trick is: don’t put @Transactional on the class. Instead, only annotate test methods that need rollback.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { ... })
public class DatabaseFilterTest {

    private static boolean setUpIsDone = false;

    @Autowired
    private YourRepository repo;

    @Before
    public void setUp() {
        if (!setUpIsDone) {
            insertTestData(); // persists real data, not rolled back
            setUpIsDone = true;
        }
    }

    @Transactional // applied only at method level if needed
    @Test
    public void testSomething() {
        ...
    }

    @Rollback(false) // ensures data is committed
    @Test
    public void insertTestData() {
        repo.save(...);
    }
}
One additional option is not to specify a fixed height or minimum height for the modal.
I had a similar issue, but in my scenario the modal was meant to behave like a sidebar and contained a text input. When I tried to use the text input, the modal shifted above the screen; even though scrolling worked, any content outside the scroll view was not visible at all. Removing the fixed height of the modal made this work.
Hope this also helps someone else.
^-?(?:[^\,]*,*\,){0}\K(?P<q>[^\,]*)
Nobody says it, but the commented example here shows that we should use tcp:// in the proxy protocol, while cURL accepts http... So this minimal script fixed the error:
$url = 'https://getcomposer.org/versions';
$opts = [
'http' => [ 'proxy' => 'tcp://proxy.redecorp.br:8080' ]
];
$context = stream_context_create($opts);
echo file_get_contents($url,false,$context);
Your executable should be python3, and -m pip should go into the extra_args part along with --user. The way you have it, Ansible tries to execute literally a file called "python3 -m pip", which obviously does not exist.
In my opinion, you should just draw random lines following a normal distribution and then check. If I did my calculations right, that should have O(n^-1) complexity.
you are welcome
It works! Thanks so much!!!
I got it — I thought the event I triggered was the only one, but I realize now that there are multiple events available.
Thanks again!
from the documentation here:
auto_unbox: automatically unbox() all atomic vectors of length 1.
It is usually safer to avoid this and instead use the unbox()
function to unbox individual elements.
An exception is that objects of class AsIs (i.e. wrapped in I()) are not
automatically unboxed. This is a way to mark single values as length-1 arrays.
Have you tried wrapping your field with class AsIs?
If you're trying to learn kubectl and your pods fail with this issue, you may need to create a master node. On a Mac M1, this worked for me:
brew install kind
kind create cluster --name dev
What worked for me (and seems to be the designed solution) is to navigate to Settings -> Languages & Frameworks -> Template Data Languages. Then select HTML from the Project language drop-down and select the directory where the templates are stored. In my case, ONLY templates are in this directory; I don't know how it will work with mixed content.
I had to go to the 3 dots and Edit:
Then I could see the Update app button:
The hate is big in me now.
Test runner was missing for mine.
<PackageReference Include="xunit.runner.visualstudio" />
Use react-native-background-actions:
https://www.npmjs.com/package/react-native-background-actions
xxfssc_dm_staging@PDBEPP> SET TERMOUT OFF
xxfssc_dm_staging@PDBEPP> /
XXFSSC_SUPPLIER_END_DATES_TRG
PK_XXFSSC_SUPPLIER_END_DATES_TRG
XXFSSC_COA_SEGMENT3_REF
......doesn't work
Just change the /go directory to devicemanager, or change your package to go inside your code, and you are good to go.
I had this problem on a M4 Mac and found that lowering the MTUs on my network connection from 1500 to 1280 did the trick, regardless of IPv6 being enabled. I hope I save someone else some trouble figuring this out! It took me hours.
I ran into problems some time ago since most Linux distributions chose to make /bin a symlink to /usr/bin (and sbin and lib respectively), and you have defined both /bin and /usr/bin in your PATH variable. cmake then generates double definitions, which you can recognize by having a double slash // in the path name.
You can fix your build.ninja file with some simple sed scripting, and you should fix your PATH definition.
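For example (demonstrated on a throwaway file; the same substitution applies to build.ninja, but review the matches first, since a blind replacement would also hit any literal // such as one inside a URL):

```shell
# Create a sample line with the kind of doubled slash cmake generated.
printf '%s\n' 'build /usr//bin/foo.o: cc src/foo.c' > demo.ninja

# Collapse "//" into "/" in place (keeps a .bak backup).
sed -i.bak 's|//|/|g' demo.ninja

cat demo.ninja
```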
Menu File -> Repair IDE worked for me.
I ran into the same scroll restoration issue in my Next.js project and spent quite a bit of time figuring out a reliable solution. Since I didn’t find a complete answer elsewhere, I want to share what worked for me. In this post, I’ll first explain the desired behavior, then the problem and why it happens, what I tried that didn’t work, and finally the custom hook I ended up using that solves it.
My setup
- Page Router (pages/_app.tsx and pages/_document.tsx)
- Navigation via <Link /> and router.push() → so the app runs as a SPA (single-page application)
Desired behavior
When navigating back and forward between pages in my Next.js app, I want the browser to remember and restore the last scroll position. Example:
The problem
By default, Next.js does not restore scroll positions in a predictable way when using the Page Router.
Important Note: Scroll behavior in Next.js also depends on how you navigate:
Using <Link> from next/link or router.push → Next.js manages the scroll behavior as part of SPA routing.
Using a native <a> tag → this triggers a full page reload, so scroll restoration works differently (the browser's default kicks in).
Make sure you're consistent with navigation methods, otherwise scroll persistence may behave differently across pages.
Why this happens
The problem is caused by the timing of rendering vs. scrolling.
What I tried (but failed)
1. experimental.scrollRestoration: true in next.config.js

Works in some cases, but not reliable for long or infinite scroll pages. Sometimes it restores too early → content isn't rendered yet → wrong position.

2. Using requestAnimationFrame

requestAnimationFrame(() => {
  requestAnimationFrame(() => {
    window.scrollTo(x, y);
  });
});

Works for simple pages but fails when coming back without scrolling on the new page (lands at bottom).
3. Using setTimeout before scrolling
setTimeout(() => window.scrollTo(x, y), 25);
Fixes some cases, but creates a visible "jump" (page opens at 0,0 then scrolls).
The solution that works reliably in my case
I ended up writing a custom scroll persistence hook. I placed this hook on a higher level in my default page layout so it's triggered once for all the pages in my application.
It saves the scroll position before navigation and restores it only when user navigates back/forth and the page content is tall enough.
import { useRouter } from 'next/router';
import { useEffect } from 'react';

let isPopState = false;

export const useScrollPersistence = () => {
  const router = useRouter();

  useEffect(() => {
    if (!('scrollRestoration' in history)) return;
    history.scrollRestoration = 'manual';

    const getScrollKey = (url: string) => `scroll-position:${url}`;

    const saveScrollPosition = (url: string) => {
      sessionStorage.setItem(
        getScrollKey(url),
        JSON.stringify({ x: window.scrollX, y: window.scrollY }),
      );
    };

    const restoreScrollPosition = (url: string) => {
      const savedPosition = sessionStorage.getItem(getScrollKey(url));
      if (!savedPosition) return;

      const { x, y } = JSON.parse(savedPosition);

      const tryScroll = () => {
        const documentHeight = document.body.scrollHeight;
        // Wait until content is tall enough to scroll
        if (documentHeight >= y + window.innerHeight) {
          window.scrollTo(x, y);
        } else {
          requestAnimationFrame(tryScroll);
        }
      };

      tryScroll();
    };

    const onPopState = () => {
      isPopState = true;
    };

    const onBeforeHistoryChange = () => {
      saveScrollPosition(router.asPath);
    };

    const onRouteChangeComplete = (url: string) => {
      if (!isPopState) return;
      restoreScrollPosition(url);
      isPopState = false;
    };

    window.addEventListener('popstate', onPopState);
    router.events.on('beforeHistoryChange', onBeforeHistoryChange);
    router.events.on('routeChangeComplete', onRouteChangeComplete);

    return () => {
      window.removeEventListener('popstate', onPopState);
      router.events.off('beforeHistoryChange', onBeforeHistoryChange);
      router.events.off('routeChangeComplete', onRouteChangeComplete);
    };
  }, [router]);
};
Final note
I hope this solution helps fellow developers facing the same scroll restoration issue in Next.js; it solved a big headache for me. Still, I'm wondering: has anyone found a more “official” or simpler way to do this with the Pages Router, or is this kind of approach the best workaround until Next.js adds first-class support?
My solution
nvm version: 0.40.3
nvm alias default 22
nvm use default
Adding the following to my .bashrc (or setting it with go env -w) works:
export GOPROXY=https://proxy.golang.org,direct
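The go env -w route mentioned above persists the setting in Go's own env file, so no shell-profile edit is needed; as a sketch:

```shell
# One-time alternative to the .bashrc export: persist GOPROXY via the toolchain.
go env -w GOPROXY=https://proxy.golang.org,direct

# Verify the effective value.
go env GOPROXY
```
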
I prefer to use this Chrome extension for DjVu to PDF conversion.
For what it's worth, I was able to retrieve the device registry using the web socket api.
Base docs: https://developers.home-assistant.io/docs/api/websocket/
Then send a message with type "config/device_registry/list" (or "config/entity_registry/list", etc.).
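For illustration, the registry-list requests are plain JSON; a minimal sketch of the payloads (the WebSocket API requires an incrementing "id" field on each message after authentication; the id values here are arbitrary):

```python
import json

# Sketch of the registry-list requests; "id" must increase with
# each message sent over the connection.
device_req = {"id": 1, "type": "config/device_registry/list"}
entity_req = {"id": 2, "type": "config/entity_registry/list"}

print(json.dumps(device_req))
```
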
Problem solved: I just changed the IP address on my Ethernet 6 adapter from 192.168.1.160 to 192.168.1.162.
Use it as follows to get code completion:
1. Create global.d.ts at resources/js with the following:
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';
declare global {
interface Window {
Pusher: typeof Pusher;
// replace "pusher" with "ably", "reverb" or whatever broadcaster you're using
Echo: Echo<'pusher'>;
}
}
2. Access it via window.Echo and you'll get a completion list.
From (I think) v2.6.0, torch.accelerator.current_accelerator() allows the device to be identified within a function without it being passed as a parameter.
That release added the torch.accelerator package, which allows easier interaction with accelerators/devices.
Open Xcode, go to Pods, click on every installed pod, check IPHONEOS_DEPLOYMENT_TARGET, and set it to 12.4.
The examples you provided illustrate two different concepts in data manipulation: interpolation in pandas and generating a linear space in numpy, but they serve different purposes and are not equivalent, although they can produce similar results in certain cases.
1. **`numpy.linspace`**: This function is used to generate an array of evenly spaced numbers over a specified interval. In your example, `np.linspace(1, 4, 7)` generates 7 numbers between 1 and 4, inclusive. It does not consider any existing data points within the range; it simply calculates the values needed to create an evenly spaced set of numbers.
2. **`pandas.Series.interpolate`**: This method is used to fill in missing values in a pandas Series using various interpolation techniques. In your example, you start with a Series containing a value at index 0 (1), NaN values at indices 1 through 5, and a value at index 6 (4). When you call `.interpolate()`, pandas fills in the NaN values by estimating the values at those indices based on the values at the surrounding indices. With the default method, 'linear', it performs linear interpolation, which in this case results in the same values as `np.linspace(1, 4, 7)`.
While the results are the same in this specific example, `np.linspace` is not doing interpolation in the sense that it is estimating missing values between known points. Instead, it is creating a new set of evenly spaced values over a specified range. On the other hand, `pandas.Series.interpolate` is estimating missing values within a dataset based on the existing values.
In summary, `np.linspace` is about creating a sequence of numbers, while `pandas.Series.interpolate` is about estimating missing values in a dataset. They can produce the same output in a specific scenario, but they are conceptually quite different.
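A quick side-by-side sketch (default linear interpolation assumed):

```python
import numpy as np
import pandas as pd

# Evenly spaced values over [1, 4]; no notion of "missing" data.
lin = np.linspace(1, 4, 7)

# Known endpoints with NaNs in between; linear interpolation estimates
# the missing values from the surrounding known ones.
ser = pd.Series([1, np.nan, np.nan, np.nan, np.nan, np.nan, 4]).interpolate()

print(lin.tolist())  # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
print(ser.tolist())  # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
```

Same numbers, different semantics: linspace never looked at the NaNs at all.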
If you want to remove the Liquid Glass effect from the UI in iOS 26, you can add
UIDesignRequiresCompatibility = YES to the Info.plist of your project.
After using this, you get the same UI as in iOS 18 and earlier versions.
First, note that according to the documentation Job Template - Extra Variables:
When you pass survey variables, they are passed as extra variables (
extra_vars) ...
Then the question becomes "How to use Ansible extra variables with Python scripts?", and a minimal example could look like Run Python script with arguments in Ansible.
---
- hosts: localhost
become: false
gather_facts: false
vars:
survey_input: test
tasks:
- name: Create example Python script
copy:
dest: survey.py
content: |
#!/usr/bin/python
import sys
print("arguments:", len(sys.argv))
print("first argument:", str(sys.argv[1]))
- name: Run example Python script
script: survey.py {{ survey_input }}
register: results
- name: Show result
debug:
msg: "{{ results.stdout }}"
It will result in the following output:
TASK [Show result] *****
ok: [localhost] =>
msg: |-
arguments: 2
first argument: test
Further Q&A
These might be interesting and helpful for this use case:
I encountered a cache issue, and I found a good solution for it.
The problem happens because when a user visits a site, the browser caches the resources. Even if you update the website, the browser may continue showing the old version instead of fetching the new resources.
If changing resource names is not an option (or not possible), you can fix this by adding a query parameter to the URL. For example:
https://yoursite.com/?version=v1
Since query parameters don’t affect the actual site content, this tricks the browser into thinking it’s a new request, so it bypasses the cache and fetches the updated resources.
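As a sketch, a small helper (the function name is mine, not from any library) that appends the version parameter to a URL:

```javascript
// Hypothetical helper: append a version query parameter so the browser
// treats the URL as a new request and bypasses its cache.
function withCacheBust(url, version) {
  const u = new URL(url);
  u.searchParams.set('version', version);
  return u.toString();
}

console.log(withCacheBust('https://yoursite.com/', 'v1'));
// → https://yoursite.com/?version=v1
```

Bump the version string on each deployment so returning visitors fetch the updated resources.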
To make a Keras model's variables trainable within a GPflow kernel, simply assign the model as a direct attribute. This works because modern GPflow automatically discovers all variables within tf.Module objects (such as Keras models), eliminating the need for a special wrapper. Please refer to this gist for an example of this approach.
An easier way is Firefox. No plugins or anything else needed.
The syntax you want to use was only introduced in v4.1, so earlier versions don't include it.
From TailwindCSS v4.1.0 onwards
With this feature, you now have the ability to include critically important classes in the compiled CSS using a syntax similar to the
safelist property from v3.
# Force update to TailwindCSS v4.1 with CLI plugin
npm install tailwindcss@^4.1 @tailwindcss/cli@^4.1
# Force update to TailwindCSS v4.1 with PostCSS plugin
npm install tailwindcss@^4.1 @tailwindcss/postcss@^4.1
# Force update to TailwindCSS v4.1 with Vite plugin
npm install tailwindcss@^4.1 @tailwindcss/vite@^4.1
Reproduction in Play CDN:
<!-- At least v4.1 is required! -->
<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/[email protected]"></script>
<style type="text/tailwindcss">
@source inline("{hover:,}bg-red-{50,{100..900..100},950}");
</style>
<div class="text-3xl bg-red-100 hover:bg-red-900">
Hello World
</div>
You may check the Integrating Ola Maps in Flutter: A Workaround Guide article for implementing Ola Maps.
I'm not sure about Uber maps.
If you need other maps, you may check this package: map_launcher.
This tool can help: https://salamyx.com/converters/po-untranslated-extractor/ Everything happens in the browser interface. You don't need to install or launch any additional utilities.
Short answer: The only way is to create widgets similar to the JVM page, based on your exported (third-party/OTel) JVM metrics.
Long answer: The JVM metrics page under APM does not show data from third-party agents or instrumentation. When using the New Relic Java agent, it works as expected, but as soon as we switched to the OTel collector, we were unable to see any JVM metrics.
Then we started exporting JVM metrics through JMX: io.opentelemetry.instrumentation:opentelemetry-jmx-metric
The data is now available in the Metric collection in NR, but still not visible on the JVM page. Apparently, the NR Java agent sends some proprietary data that you cannot send :(
I think I found your issue.
Can you go to your index and check whether you have added the text embedding model there as well?
Make sure you add the ada-002 model, as others won't work in the playground/free tier. After you add the model, it takes about 5 minutes and then it should no longer be grayed out.
Here is where you have to add it: [screenshot]
Wendel got it almost right, but the pointer stars are wrong.
Here is my tested proposal:
FILE *hold_stderr;
FILE *null_stderr;
hold_stderr = stderr;
null_stderr = fopen("/dev/null", "w");
stderr = null_stderr;
// your stderr-suppressed code here
stderr = hold_stderr;
fclose(null_stderr);
To run the `dpdk-helloworld` app without errors with the mlx5 driver for a Mellanox card, I did this:
sudo setcap cap_dac_override,cap_ipc_lock,cap_net_admin,cap_net_raw,cap_sys_admin,cap_sys_rawio+ep dpdk-helloworld
To showcase the security attributes, we've modified the sample source code to inspect the method triggering the error and show its security attributes:
Type type = new Microsoft.Ink.Recognizers().GetType();
// get the MethodInfo object of the current method
MethodInfo m = type.GetMethods().FirstOrDefault(
    method => method.Name == "GetDefaultRecognizer"
    && method.GetParameters().Count() == 0);
// show whether the current method is Critical, Transparent or SafeCritical
Console.WriteLine("Method GetDefaultRecognizer IsSecurityCritical: {0} \n", m.IsSecurityCritical);
Console.WriteLine("Method GetDefaultRecognizer IsSecuritySafeCritical: {0} \n", m.IsSecuritySafeCritical);
Console.WriteLine("Method GetDefaultRecognizer IsSecurityTransparent: {0} \n", m.IsSecurityTransparent);
That code generates this output:
Method GetDefaultRecognizer IsSecurityCritical: False
Method GetDefaultRecognizer IsSecuritySafeCritical: False
Method GetDefaultRecognizer IsSecurityTransparent: True
Because the method in the Microsoft.Ink library is processed as SecurityTransparent, and errors are arising, it needs to be tagged as either SecurityCritical or SecuritySafeCritical.
Is there anything we can do at our code level?
I want to add another answer to this old question because I prefer using basic tools (available on most systems), and I'd like to clarify why the trivial approach doesn't work.
Why doesn't the trivial approach work?
cat sample.json | jq '.Actions[] | select (.properties.age == "3") .properties.other = "no-test"' >sample.json
At first glance this looks fine, but it does not work.
First of all, the problem of the file being overwritten comes from >sample.json, not from jq itself.
When you use >sample.json, the shell opens the file for writing before jq starts reading it, which truncates the file to zero length.
How to work around it? Use a command that handles output itself (like sed -i, wget -O, etc.) instead of a shell redirection.
cat sample.json | jq '.Actions[] | select (.properties.age == "3") .properties.other = "no-test"' | dd of=sample.json status=none
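The truncation can be demonstrated with any reading command, not just jq (GNU cat will even warn that input and output are the same file, but by then the file is already empty):

```shell
printf 'hello\n' > demo.txt
cat demo.txt > demo.txt 2>/dev/null || true  # shell truncates demo.txt before cat reads it
wc -c < demo.txt                             # 0: the content is gone
```
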
There is a separate setting for disabling all AI features, distinct from "Hide Copilot". As per the official documentation:
You can disable the built-in AI features in VS Code with the [chat.disableAIFeatures setting](vscode://settings/chat.disableAIFeatures) [the link opens the VS Code setting directly], similar to how you configure other features in VS Code. This disables and hides features like chat or inline suggestions in VS Code and disables the Copilot extensions. You can configure the setting at the workspace or user level.
[...]
If you have previously disabled the built-in AI features, your choice is respected upon updating to a new version of VS Code.
"Hide Copilot" and the disable-all-AI-features setting are different settings in different places, which has already caused confusion. The decision to make this all opt-out seems to be deliberate and final, as per this GitHub issue.
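For reference, in a user or workspace settings.json, the setting named in the quoted docs would look like:

```json
{
  "chat.disableAIFeatures": true
}
```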
The best I can think of is making a prompt and sending it, alongside the columns, to an AI.
Prompt:
I have a two-column table where the data is mixed up:
The left column should contain Job Titles.
The right column should contain Company Names.
But in many rows, the values are swapped (the job title is in the company column and the company is in the job title column).
Your task:
For each row, decide which value is the Job Title and which is the Company Name.
Output a clean table with two columns:
Column 1 = Job Title
Column 2 = Company Name
If you are uncertain, make your best guess based on common job title words (e.g., “Engineer”, “Manager”, “Developer”, “Director”, “Intern”, “Designer”, “Analyst”, “Officer”, etc.) versus typical company names (ending with “Inc”, “Ltd”, “LLC”, “Technologies”, “Solutions”, etc.).
Keep the table format so I can paste it back into Google Sheets.
Here is the data that needs to be corrected:
COL A:
...
COL B:
...
I think you can use the headerShown: false option on the Stack. For example:
import { Stack } from 'expo-router';
export default function Layout() {
return (
<Stack>
<Stack.Screen name="(tabs)" options={{ headerShown: false }} />
</Stack>
);
}
You should check out this documentation.
https://docs.expo.dev/router/advanced/stack/#screen-options-and-header-configuration
Hiding tabs:
https://docs.expo.dev/router/advanced/tabs/#hiding-a-tab
For Corrected Job Title (Column C):
=IF(REGEXMATCH(A2, "(Engineer|Manager|Developer|Designer|Lead|Intern)"), A2, B2)
For Corrected Company Name (Column D):
=IF(REGEXMATCH(A2, "(Engineer|Manager|Developer|Designer|Lead|Intern)"), B2, A2)