This works for me with pandas:
settings.py
APP_BBDD = {
"LOCAL": {
"DNS": "localhost:1521/XE", <- LOOK THIS!!
"USER": "USERNAME",
"PASS": "holaqase",
}
}
And
import pandas as pd
import oracledb
import settings
def check_bbdd(environment="LOCAL"):
    """
    Check the database connection with a test query.
    """
    df = _get_df("SELECT * FROM TABLE_NAME", environment)
    print(df.head())
    return df

def _get_df(query, environment="LOCAL"):
    with oracledb.connect(
        user=settings.APP_BBDD[environment]["USER"],
        password=settings.APP_BBDD[environment]["PASS"],
        dsn=settings.APP_BBDD[environment]["DNS"],
    ) as conn:
        return pd.read_sql(query, conn)
And one test:
Same here. Apparently Grok's and ChatGPT's suggestion is to stop using Expo Go entirely and use expo-dev-client, which is far more cumbersome and heavyweight.
package.json:
"dependencies": {
"@expo/vector-icons": "^14.0.2",
"@react-native-async-storage/async-storage": "2.1.2",
"@react-native-community/datetimepicker": "8.3.0",
"@react-native-community/netinfo": "11.4.1",
"@react-native-community/slider": "4.5.6",
"@react-native-picker/picker": "2.11.0",
"@react-navigation/bottom-tabs": "^7.3.10",
"@react-navigation/native": "^7.1.6",
"buffer": "^6.0.3",
"date-fns": "^4.1.0",
"dotenv": "^16.5.0",
"expo": "~53.0.5",
"expo-constants": "~17.1.5",
"expo-device": "~7.1.4",
"expo-haptics": "~14.1.4",
"expo-linear-gradient": "~14.1.4",
"expo-notifications": "~0.31.1",
"expo-status-bar": "~2.2.3",
"firebase": "^11.6.1",
"react": "19.0.0",
"react-hook-form": "^7.54.2",
"react-native": "0.79.2",
"react-native-calendars": "^1.1310.0",
"react-native-gesture-handler": "~2.24.0",
"react-native-safe-area-context": "5.4.0",
"react-native-screens": "~4.10.0",
"unique-names-generator": "^4.7.1"
},
"devDependencies": {
"@babel/core": "^7.25.2",
"@types/react": "~19.0.10",
"typescript": "~5.8.3"
},
firebaseConfig.ts:
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';
import { getStorage } from 'firebase/storage';
import { getAnalytics } from "firebase/analytics";
import Constants from 'expo-constants';
import { getAuth, initializeAuth, getReactNativePersistence } from 'firebase/auth';
import AsyncStorage from '@react-native-async-storage/async-storage';
const firebaseConfig = {
apiKey: Constants.expoConfig.extra.firebaseApiKey,
authDomain: Constants.expoConfig.extra.firebaseAuthDomain,
projectId: Constants.expoConfig.extra.firebaseProjectId,
storageBucket: Constants.expoConfig.extra.firebaseStorageBucket,
messagingSenderId: Constants.expoConfig.extra.firebaseMessagingSenderId,
appId: Constants.expoConfig.extra.firebaseAppId,
};
// Initialize Firebase
const app = initializeApp(firebaseConfig);
const analytics = getAnalytics(app);
export const firestore = getFirestore(app);
export const storage = getStorage(app);
// Initialize Auth with persistence
export const auth = initializeAuth(app, {
persistence: getReactNativePersistence(AsyncStorage),
});
The persistence setting has had no effect; I have tried everything.
Yeah, this actually comes up a lot when training a tokeniser from scratch. Just because a word shows up in your training data doesn’t mean it will end up in the vocab. It depends on how the tokeniser is building things.
Even if “awesome” appears a bunch of times, it might not make it into the vocab as a full word. WordPiece tokenisers don’t just add whole words automatically. They try to balance coverage and compression, so sometimes they keep subword pieces instead.
If you want common words like that to stay intact, here are a few things you can try:
Increase vocab_size to something like 8000 or 10000. With 3000, you are going to see a lot of splits.
Lowering min_frequency might help, but only if the word is just barely making the cut.
Check the text file you're using to train. If “awesome” shows up with different casing or punctuation, like “Awesome” or “awesome,”, it might be treated as separate entries.
Also make sure it’s not just appearing two or three times in a sea of other data. That might not be enough for it to get included.
Another thing to be aware of is that when you load the tokeniser using BertTokenizer.from_pretrained(), it expects more than just a vocab file. It usually looks for tokenizer_config.json, special_tokens_map.json, and maybe a few others. If those aren't there, sometimes things load strangely. You could try using PreTrainedTokenizerFast instead, especially if you trained the tokeniser with the tokenizers library directly.
You can also just check vocab.txt and search for “awesome”. If it’s not in there as a full token, that would explain the split you are seeing.
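As a quick sketch of that check (the file name `vocab.txt` and the token are just example assumptions), you can scan the vocab file for the exact token:

```python
def token_in_vocab(vocab_path, token):
    """Return True if `token` appears as a full entry in a WordPiece vocab file."""
    with open(vocab_path, encoding="utf-8") as f:
        return any(line.strip() == token for line in f)

# Example: check whether "awesome" survived as a whole token
# print(token_in_vocab("vocab.txt", "awesome"))
```

If it only appears as pieces like `awe` and `##some`, that confirms the split you are seeing.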
Nothing looks broken in your code. This is just standard behaviour for how WordPiece handles vocab limits and slightly uncommon words. I’ve usually had better results with vocab sizes in the 8 to 16k range when I want to avoid unnecessary token splits.
There is an open Expo issue related to Expo Router; you can follow it here: https://github.com/expo/expo/issues/36375
For real-time synchronization of products and inventory between two Odoo instances:
Option 1: Cron Jobs (Easiest)
Syncs data periodically (e.g., every few minutes).
Pros: Easy to implement, flexible, less complex.
Cons: Not real-time, potential for conflicts if updates happen simultaneously.
Option 2: Database Replication (Complex)
Keeps data synchronized in real-time at the database level.
Pros: Real-time updates, ensures consistency.
Cons: Complex to set up and manage, requires advanced knowledge, potential for replication issues.
Recommendation: If real-time updates are crucial, go for Database Replication. If a small delay is acceptable, Cron Jobs can be a simpler solution.
Revisiting this again.
Actually, my previously accepted answer was not what it ended up being.
When using the MCP23017, I noticed that polling the GPIOA/GPIOB registers is very unreliable when OUTPUTS are changed, but very consistent on INPUT changes.
Instead of polling GPIOA/GPIOB for output status, I write to OLATA/OLATB, which forces the chip into that state. I am not saying it will be 100% right, but it has led me to far greater success. I hope this backtrack helps you in the future if needed.
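A rough Python sketch of that write-to-latch idea (register addresses assume the MCP23017's default BANK=0 addressing; the mock bus stands in for a real I2C bus such as smbus2.SMBus, so this runs without hardware):

```python
OLATA = 0x14  # output latch register, port A (BANK=0 addressing)
OLATB = 0x15  # output latch register, port B

class MockBus:
    """Stand-in for a real I2C bus (e.g. smbus2.SMBus) so the sketch runs without hardware."""
    def __init__(self):
        self.regs = {}
    def write_byte_data(self, addr, reg, value):
        self.regs[(addr, reg)] = value
    def read_byte_data(self, addr, reg):
        return self.regs.get((addr, reg), 0)

def set_outputs(bus, addr, pattern):
    # Write the desired pin states to OLATA instead of polling GPIOA afterwards;
    # the chip drives its outputs directly from the latch.
    bus.write_byte_data(addr, OLATA, pattern)

bus = MockBus()
set_outputs(bus, 0x20, 0b00000101)           # pins A0 and A2 high
print(bin(bus.read_byte_data(0x20, OLATA)))  # 0b101
```

With a real bus, only the `MockBus` would change; the register write is the same.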
Sadly, this is considered a cheat and code injection inside Roblox, which breaks the Roblox ToS. If you could print to the console, that would mean you could also change your player's walk speed, etc., because everything you are doing is client-side.
This means that if you managed to print "hello", then you could also do client-side things like moving your character, flying, or jumping really high, but you can't affect other players. If you tried to change the color of a part, for example, only you would see it, not others.
Anyway, everything that you are trying to do is an exploit or cheat because it interacts with the Roblox client in a malicious way, injecting and executing code. Also, SynapseX is a paid cheat for Roblox that can perform more advanced things, but it is still not server-side.
The only way you can interact with the client without breaking the ToS is changing the FPS cap or adding shaders to the game; that's all.
Just as extra info: when there is a colon after the name of the server, what follows is the port you are connecting to on that server. It's supposed to be a number between 0 and 65535. This could also be why you couldn't access the routes.
From Gemini: "There are a number of common networking ports that are used frequently. Ports 0 through 1023 are defined as well-known ports. Registered ports are from 1024 to 49151. The remainder of the ports from 49152 to 65535 can be used dynamically by applications."
This is not just applicable to qt configure but also to CMake when it runs try_compile.
Simply add the flag
--debug-trycompile
You don't need the UUID
{B4BFCC3A-DB2C-424C-B029-7FE99A87C641}
because the constants are defined in the library.
from win32comext.shell import shell
documents = shell.SHGetKnownFolderPath(shell.FOLDERID_Documents)
downloads = shell.SHGetKnownFolderPath(shell.FOLDERID_Downloads)
Oh, I forgot the expr option; never mind:
vim.keymap.set(
{ 'n', 'x' },
'<Tab>',
function() return vim.fn.mode() == 'V' and '$%' or '%' end,
{ noremap = true, expr = true }
)
Just found a solution, thanks to @Xebozone
Using Microsoft Identity I want to specify a Return Url when I call Sign In from my Blazor App
Since you posted your question, AWS launched Same-Region Replication (SRR) in 2019. This allows you to replicate objects and changes in metadata across two buckets in the same region.
S3 Batch Replication can be used to replicate objects that were added prior to Same-Region Replication being configured.
After many trials with ChatGPT, it was resolved. Here it is:
// Instead of this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd));
// Use this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet));
It was an implementation detail for Python 3.6 and lower; for Python 3.7 it became a language feature. See this thread on the Python mailing list: https://mail.python.org/pipermail/python-dev/2017-December/151283.html
Make it so. "Dict keeps insertion order" is the ruling. Thanks!
Maybe
^([^\:]+)?\:?([^\:]+)*\:?([^\:]+)*$
I created a Sample Blazor Server App with Azure Ad B2C by following this Documentation.
I successfully logged in and logged out without any issues.
Below is My Complete code.
Program.cs:
using System.Reflection;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using BlazorApp1.Components;
using System.Security.Claims;
namespace BlazorApp1;
public class Program
{
public static void Main(string[] args)
{
var builder = WebApplication.CreateBuilder(args);
var env = builder.Environment;
builder.Configuration
.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
.AddEnvironmentVariables()
.AddUserSecrets(Assembly.GetExecutingAssembly(), optional: true);
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));
builder.Services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
options.Events = new OpenIdConnectEvents
{
OnSignedOutCallbackRedirect = ctxt =>
{
ctxt.Response.Redirect(ctxt.Options.SignedOutRedirectUri);
ctxt.HandleResponse();
return Task.CompletedTask;
},
OnTicketReceived = ctxt =>
{
var claims = ctxt.Principal?.Claims.ToList();
return Task.CompletedTask;
}
};
});
builder.Services.AddControllersWithViews().AddMicrosoftIdentityUI();
builder.Services.AddRazorComponents()
.AddInteractiveServerComponents()
.AddMicrosoftIdentityConsentHandler();
builder.Services.AddCascadingAuthenticationState();
builder.Services.AddHttpContextAccessor();
var app = builder.Build();
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.UseAntiforgery();
app.MapRazorComponents<App>()
.AddInteractiveServerRenderMode();
app.Run();
}
}
MainLayout.razor:
@inherits LayoutComponentBase
<div class="page">
<div class="sidebar">
<NavMenu />
</div>
<main>
<div class="top-row px-4">
<AuthorizeView>
<Authorized>
Hello @context.User.Identity?.Name!
<a href="MicrosoftIdentity/Account/SignOut">Log out</a>
</Authorized>
<NotAuthorized>
<a href="/MicrosoftIdentity/Account/SignIn">Sign in with your social account</a>
</NotAuthorized>
</AuthorizeView>
</div>
<article class="content px-4">
@Body
</article>
</main>
</div>
<div id="blazor-error-ui">
An unhandled error has occurred.
<a href="" class="reload">Reload</a>
<a class="dismiss"></a>
</div>
appsettings.json:
"AzureAdB2C": {
"Instance": "https://<DomainName>.b2clogin.com/tfp/",
"ClientId": "<clientid>",
"CallbackPath": "/signin-oidc",
"Domain": "<DomainName>.onmicrosoft.com",
"SignUpSignInPolicyId": "<PolicyName>",
"ResetPasswordPolicyId": "",
"EditProfilePolicyId": ""
}
Make sure to add the redirect URL in the app registration as shown below:
Output:
Not sure if this will help anyone, but it looks like the token changes at midnight and noon every day. I found that I had to regenerate the token at noon in order to get any of my code working in the afternoon. (This may not be an issue with the code you all are using since you generate the token each time you run your hits against GHIN, but wanted to throw it out there for anyone that may be storing the token and using it later, which is what my code does).
This can also be done using useState in React. On clicking the button, the state changes, and depending on the state we show the textarea.
const [clicked, setClicked] = useState(false);
<Textarea
placeholder="Add Your Note"
className={`${clicked ? "visible": "collapse"}`}
/>
<Button
onClick={(e) => {
setClicked(!clicked);
}}
>
Add Note
</Button>
Try this:
^(.+?)(?::(\d+))?(?::(\d*))?$
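A quick sanity check of that pattern in Python (the sample strings are just assumed inputs):

```python
import re

pattern = re.compile(r'^(.+?)(?::(\d+))?(?::(\d*))?$')

# Each optional colon-delimited group is captured separately when present
print(pattern.match("example.com").groups())          # ('example.com', None, None)
print(pattern.match("example.com:8080").groups())     # ('example.com', '8080', None)
print(pattern.match("example.com:8080:42").groups())  # ('example.com', '8080', '42')
```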
I had this problem this week.
And the answer was simply to set reverse_top_level to true.
The extension Debugger for Chrome (https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome) has been deprecated, as Visual Studio Code now has a bundled JavaScript debugger (js-debug) that covers the same functionality and more (it debugs Node.js, Chrome, Edge, WebView2, VS Code extensions, Blazor, React Native)!
To use it with Node.js, read this: https://code.visualstudio.com/docs/nodejs/nodejs-debugging.
Found'em. They can be found at:
I faced the same issue today, and this is how I solved it.
The root cause is the same: @MockitoSpyBean requires that a bean is present for Mockito to spy on. Previously, with @SpyBean, a bean was created if none was present.
I tested @ContextConfiguration, but it seems to break the default auto-configuration, which causes some of the filters/handlers not to be loaded.
Instead, I use @Import at class level, and @MockitoSpyBean works as expected afterwards.
@WebMvcTest(MyController.class)
@AutoConfigureMockMvc
@Import(MyBeanClass.class) // add this
class MyControllerTest {
@Autowired
MockMvc mockMvc;
@MockitoSpyBean
MyBeanClass myBean;
@Test
void myTest() {
mockMvc.perform(get("/xxx"));
// use Spy here.
verify(myBean, times(1)).xxx();
}
}
I have a question: if you disable MSAL, what happens when a logged-in user signs a form with their account? I'm asking because I am also creating end-to-end tests for an Angular application.
Yours is getting converted to a string because of those braces @{...} around your function in code view. Try removing the action and redeclaring the variable; it should work. If it still does not, explicitly use the createArray(range(0,10)) function to convert it to an array.
RandomizedSearchCV can give worse results than manual tuning due to a few common reasons:
Too few iterations – n_iter=10 may not explore enough parameter combinations.
Poor parameter grid – your grid might miss optimal values or be too coarse.
Inconsistent random seeds – different runs can yield different results if random_state isn't set.
Improper CV splits – use StratifiedKFold for balanced class sampling.
Wrong scoring metric – make sure scoring aligns with your real objective (e.g., accuracy, f1).
Try adding the property <property name="net.sf.jasperreports.export.xls.auto.fit.column" value="true"/> in the reportElement section, and in the paragraph section add <paragraph lineSpacing="1_1_2"/>. Don't forget to add textAdjust="StretchHeight" to the textField.
If you're here in 2025, just use Angular 19. It'll reload in place, without a full page refresh. You're welcome.
On the C# side, make sure the RestClient sends the correct headers:
request.AddHeader("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
On the PHP side, at the top of your script (before output), force UTF-8 interpretation:
header('Content-Type: text/html; charset=utf-8');
mb_internal_encoding('UTF-8');
Also, ensure your PHP script correctly reads the POST parameters:
$content = json_decode($_POST['content'], true);
Double-check your MySQL connection:
$this->db->exec("SET NAMES utf8mb4");
$this->db->exec("SET CHARACTER SET utf8mb4");
I enabled BuildConfig in my modules using this article, which also has a guide on how to improve build speed:
https://medium.com/androiddevelopers/5-ways-to-prepare-your-app-build-for-android-studio-flamingo-release-da34616bb946
The issue was indeed related to the apache-airflow-providers-fab package, as suggested by @bcincy's comment.
x-airflow-common:
&airflow-common
# ... other common settings ...
environment:
&airflow-common-env
# ... existing environment variables (including AIRFLOW__CORE__BASE_URL) ...
# Modified this line to add the FAB provider package:
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-} apache-airflow-providers-fab==2.0.2
# ... rest of environment variables ...
Then recreate the containers:
docker compose down
docker compose up -d
I think this question needs more context, but one thing you could try is making use of Rails Runner.
I had the same error when my NuGet packages were set to different versions: Microsoft.SemanticKernel.Connectors.AzureOpenAI was 1.47.0 and the other was 1.48.0. Updating both to 1.48.0 fixed the issue. Make sure all your SemanticKernel-related NuGet packages are on the same version.
Thanks @Thom, @Siggemannen for your insights.
I tried in my environment by saving the .csv file in UTF-8 format and importing the data into Azure SQL DB, and it worked well, saving the data in the same format into my Azure SQL Database, as shown in the output below.
I saved the following .csv data in UTF-8 format into the Azure SQL DB:
Id,Description
1,"Price is £100"
Output:
Try pnpm dlx tailwindcss@3 init; it should be OK!
Instead of following the installation steps:
winget install Schniz.fnm
fnm install 22
node -v # Should print "v22.15.0".
npm -v # Should print "10.9.2".
use the Windows Installer (.msi) button.
Just use the modulo operator.
import pandas as pd
pd.to_timedelta('-1 days 2:45:00') % pd.Timedelta(hours=24)
It seems that Loco Translate is working. I created the language files with Loco Translate by saving them at the languages/plugins/woocommerce-en_US.po path.
I replaced "Collection from <strong>%s</strong>:" with "X2_Collection from_X <strong>%s</strong>:" for testing, or with "<strong>%s</strong>:" to remove it.
The "Collection from" title will be replaced in "You've got a new order" and "Order has been received" email notifications.
Tested on:
WordPress 6.8.1 (Language: en_US)
WooCommerce 9.8.3
Loco Translate 2.7.2
You can refer to this official document to get the list of followers and following by user ID.
https://api.twitter.com/2/users/{id}/followers
https://api.twitter.com/2/users/{id}/following
GEBCO API – A Simple and Fast Bathymetry Data Access Tool
🔗 GEBCO API Web App: https://gebcoapi-91108147194.us-central1.run.app/
📂 GitHub Repository: https://github.com/kumarsmahmoodi/gebcoapi
🌍 This web application allows easy access to GEBCO bathymetry data through a clean and developer-friendly RESTful API.
🔧 Key Features:
✅ Get elevation for a single point
✅ Retrieve batch elevations for multiple locations
✅ Download gridded depth data in NetCDF format for custom-defined areas
✅ Written in FastAPI with a modern frontend and ready-to-use examples (including MATLAB)
✅ Supports browser access, programmatic calls
Can you clarify which authentication file needs to be added to the domain? I'm at a dead end with this issue; my Apple Pay wallet is not loading.
document.getElementById('img-rotation').oninput = function() {
  // rect.set('angle', this.value); // remove this
  rect.rotate(this.value); // add this; value is in degrees
};
Data are very similar to Resources in CDKTF, you probably just need to find the right import. I'm not sure I've found the exact Azure Subscriptions data you're looking for but hopefully it illustrates the approach.
https://github.com/cdktf/cdktf-provider-azurerm/blob/main/docs/dataAzurermSubscription.python.md
from cdktf_cdktf_provider_azurerm.data_azurerm_subscription import DataAzurermSubscription
...
DataAzurermSubscription(stack, ...)
Applicable for .NET 6+: reinstalling the same .NET SDK again resolved my issue.
This issue is almost always caused by either:
An old or mismatched VC++ redistributable version
A third-party DLL conflicting due to PATH pollution
Fixing or isolating the environment usually resolves the problem. Let me know if you need help tracking down the conflicting path.
Activating the virtual environment from the VS Code terminal works for me. That is rather than opening a separate terminal, open the virtual environment directory from the VS Code terminal and then activate it (source newenv/bin/activate). Then the virtual environment can be selected by clicking on the top right corner virtual environment icon in a jupyter notebook.
When using the ExprTk library (which is a C++ expression parsing and evaluation library), you may encounter performance issues or even crashes when large or complex expressions are compiled. To limit large expressions from being compiled or to manage their complexity, here are a few strategies you can consider:
1. Expression Length Limit
You can impose a limit on the length of the expression string before attempting to compile it. This is a simple way to ensure that extremely large expressions are not compiled.
std::string expr = "some large expression...";
// Set a maximum length for the expression
size_t max_length = 1000; // Adjust according to your needs
if (expr.length() > max_length) {
std::cerr << "Expression too large to compile" << std::endl;
return;
}
I see nothing incorrect. The node without any incoming token requirements (“START”) will start and await a signal.
ObjectSet(MPrefix+"FIBO_LAB",OBJPROP_FIBOLEVELS,6);
must be modified to
ObjectSet(MPrefix+"FIBO_LAB",OBJPROP_FIBOLEVELS,7);
Both of the words 'Desi' and 'Vogue' have associations with adult content. Google's AI could be misinterpreting this and flagging adult content. I would suggest removing them to see if that allows the prompt.
In my case, there were two files with the same name: theme.ts and theme.d.ts, and renaming theme.d.ts to something different worked. 🤷
Have you tried using instagram's API? That may have a solution to your problem.
I have written an open protocol emulator for an Atlas Copco torque tool in Python, since it's very hard to find a decent one:
Hey, did you find out how to do it? I need to figure this out for a project. Thanks.
DIR_NAME="your_directory_name"
if [ -d "$DIR_NAME" ]; then
rm -rf "$DIR_NAME"
echo "Existing directory '$DIR_NAME' removed."
fi
mkdir -p "$DIR_NAME"
chmod 777 "$DIR_NAME"
echo "Directory '$DIR_NAME' created with 777 permissions."
Put this attribute on at least one action method: [MapToApiVersion("your version")]
and it will work.
To see all the Python built-in modules, you can simply run
help('modules')
Resolved after downloading the latest supported Microsoft Visual C++ Redistributable: https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#latest-microsoft-visual-c-redistributable-version
Which one you need depends on your machine.
If you have the Angular DevTools browser extension, it can also show you the version.
Otherwise, you can open the inspector, go to Elements, find app-root, and on it there should be an ng-version attribute that displays the Angular version.
Looks like there is no straight forward way to do this. I ended up using regular textboxes, saving their values manually in a dict and if on lost focus there was no valid string (int for example) I reverted back to original value.
The reason was my lack of knowledge of how Apple handles installed apps.
They are actually folders, with the executable somewhere within. Treating the dropped app as a folder solved the problem.
Adding a bit to @tgdavies' comment:
After mvn release:prepare, the settings are stored in a file called release.properties. The version is stored in a line of the form project.rel.<groupId>:<artifactId>=<version>. Assuming the group and artifact ids can be ignored, the line can be extracted using grep:
grep -E '^project.rel\..*=' release.properties
And cut or awk can be used to extract the version. Combined, it gives:
grep -E '^project.rel\..*=' release.properties | cut -d'=' -f2
1. This prints the SQL query in Laravel without the dynamic parameters passed to it:
DB::table('users')->toSql();
2. This prints the SQL query in Laravel with the dynamic parameters passed to it:
DB::table('users')->toRawSql();
These are the steps that AI gave me (they work for my simplified dataset).
#Pivot longer to gather Q2 and Q3 columns
df_long <- df1 %>%
pivot_longer(cols = starts_with("Q2"), names_to = "Q2", values_to = "Q2_value") %>%
pivot_longer(cols = starts_with("Q3"), names_to = "Q3", values_to = "Q3_value")
# Separate Q1 into multiple rows
df_long <- df_long %>%
separate_rows(Q1, sep = ",\\s*")
# Filter rows to match Q1 with Q2 and Q3 columns using mapply
df_long <- df_long %>%
filter(mapply(grepl, Q1, Q2) & mapply(grepl, Q1, Q3))
# Select and rename columns to match the desired format
df_long <- df_long %>%
select(ID, Q1, Q2_value, Q3_value) %>%
rename(Q2 = Q2_value, Q3 = Q3_value)
For whom it may concern: I realized that using the same DB for prod and dev generates this error.
Refreshing everything and separating the DBs worked for me.
I've found the answer here:
TextFields with `value`, `onValueChange` parameters do not support `showKeyboardOnFocus` option. You will need to use the new `TextField` that accepts `TextFieldState`.
It's an issue with the opencv_videoio_ffmpeg<version>.dll file.
I just took my version of the DLL (opencv_videoio_ffmpeg4110_64.dll), renamed it to opencv_videoio_ffmpeg454_64.dll (the name from opencv-python==4.5.4.60), and used it in place of the old file.
My issue got resolved.
You can find this DLL file inside Lib/site-packages/cv2.
Microsoft.Office.Interop.Word
is the primary library used to automate and manipulate Word documents (e.g., editing headers, converting to PDF), whereas Microsoft.Office.Core
provides shared Office-related interfaces (like ribbon customization) but cannot handle document-specific tasks. While Interop.Word
works well on desktop systems, it often crashes or behaves unpredictably on servers like Windows Server 2008 R2 because Microsoft does not support Office automation in server-side environments. For stable server-side document processing, consider alternatives like Open XML SDK (for document manipulation) and LibreOffice CLI, Aspose.Words, or Syncfusion (for PDF conversion without needing Word installed).
Hey, you forgot some double quotes; maybe that is why it did not work? Try this:
#page {
width: 1200px;
margin: 0px auto -1px auto;
padding: 15px;
}
#logtable {
width: 800px;
float: left;
}
#divMessage {
width: 350px;
position: relative;
right: -5px;
top: -20px;
}
<div id="page">
<table id="logtable">
[stuff]
</table>
<div id="divMessage">
[more stuff]
</div>
</div>
My version, which works:
put every parameter in double quotes,
and add store=MY.
This looks to be a consequence of different issues: PowerShell parameter handling, and netsh.
The attached image shows that this works on the server and on the client when checking HTTPS.
You can also use group aggregation functionality (see also https://arrow.apache.org/docs/python/compute.html#grouped-aggregations):
import numpy as np
import pyarrow as pa
from typing import Literal
def deduplicate(table: pa.Table, keys: str | list[str], op: Literal["one", "first", "last"] = "one") -> pa.Table:
    # Tag each row with its position so we can pick one row per group
    table = table.append_column('__index__', pa.array(np.arange(len(table))))
    # "first"/"last" need a deterministic order, so only use threads for "one"
    grps = table.group_by(keys, use_threads=(op == "one")).aggregate([('__index__', op)])
    table = table.take(grps['__index___' + op])
    return table.drop_columns(['__index__'])
For Laravel 9 or greater:
If the jobs table has not been created, run the command below to create a migration file for it:
php artisan queue:table
And then run the migration using:
php artisan migrate
I think this is a synchronisation issue where .save() is called twice simultaneously, and JPA doesn't detect that the object exists, so it tries to insert it both times.
@AmitKumar Can you confirm if this was the issue? What was the solution to your problem?
`user` will be undefined here:
async jwt({ token, user, account, profile }) {
or here:
async session({ session, token }) {
because `user` is only available the first time a user signs in, or when the user object is explicitly passed to the session update method.
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Selenium 4 comes with Selenium Manager, which can download the appropriate driver based on the installed Chrome version. I mean to say that specifying the chromedriver path is no longer required.
A similar issue was reported for Swashbuckle after updating to version 8.0.0. It appears to be caused by the browser caching the old version of swagger-ui.
The solution that worked for me was doing a forced refresh (Ctrl+F5) of the swagger page.
The `a.` prefixed hubId is for BIM360 Team, Fusion Team, or A360, not for ACC.
For ACC, look for the `b.` prefixed ones instead.
Here is the field description in ACC Admin API:
https://aps.autodesk.com/en/docs/acc/v1/reference/http/admin-accounts-accountidprojects-GET/
If you are using a JetBrains IDE (WebStorm, for example), please ensure that you have enabled the options shown below.
It has been resolved. Thank you for the support and help.
This method will open the device's native camera module on Android:
await Linking.sendIntent('android.media.action.STILL_IMAGE_CAMERA', [])
Morning (Subah):
6:30 AM – 8:00 AM: Math
Focus on solving problems and understanding concepts.
Revise key formulas and work through sample problems.
8:30 AM – 10:00 AM: Chemistry
Study chemical reactions, numericals, and revision of important chapters.
Focus on understanding concepts like acids, bases, and salts.
Evening (Shaam):
7:00 PM – 8:00 PM: Hindi + English
Practice writing, grammar exercises, and reading comprehension.
Read short stories or chapters from your syllabus for improvement.
Night (Raat):
9:00 PM – 11:00 PM: Physics
Revise concepts, work on numericals, and review theory.
Focus on understanding key principles and equations.
GitHub uses its own language parsing module, and sometimes, it makes mistakes. Just write more code to make it easier for the parser to choose your main language, and after some time, GitHub will get it right.
Can't give any info on yGuard. We switched to ProGuard...
So, after a night of searching and reading, I finally found the solution (I feel stupid about it now):
Text(
    text = reader.value[viewModel.currentPage],
    style = LocalTextStyle.current.copy(
        textAlign = TextAlign.Justify,
        textIndent = TextIndent(
            firstLine = 35.sp,
        ),
        hyphens = Hyphens.Auto,
        lineBreak = LineBreak.Paragraph.copy(
            strategy = LineBreak.Strategy.HighQuality,
            strictness = LineBreak.Strictness.Strict,
            wordBreak = LineBreak.WordBreak.Phrase,
        ),
        fontSize = MaterialTheme.typography.bodyLarge.fontSize,
        lineHeight = MaterialTheme.typography.bodySmall.lineHeight,
        letterSpacing = TextUnit.Unspecified,
    ),
    modifier = Modifier
        .fillMaxWidth(1f)
        .border(1.dp, Color.White),
)
I added letterSpacing = TextUnit.Unspecified, and everything is now as I want.
In PyCharm, the accepted answer works but instead of
streamlit.cli
use
streamlit.web.cli
as the module (Streamlit v1.45.0).
This library really helps me easily extract data into Excel.
There was a problem on the server side. The server restricted some characters, so when I added that content it was treated as SQL injection, and the server returned an Internal Server Error without writing anything to error_log.
Since none of the solutions given by developers worked, I finally contacted server support, as the same code had worked on the previous server. They solved it on their side.
So if anyone faces this type of problem after migrating, even when the code is correct, please talk to your server support.
Thank you very much to all of the developers who supported this question and offered solutions.
You can consider using Yjs subdocuments for this; they can load different documents without reconnecting the websocket. The official y-websocket does not implement subdocuments by default. This repo implements subdocuments: https://github.com/RedDwarfTech/texhub-broadcast/blob/main/src/websocket/conn/socket_io_client_provider.ts. This is the official discussion board thread on how to implement subdocuments: https://discuss.yjs.dev/t/extend-y-websocket-provider-to-support-sub-docs-synchronization-in-one-websocket-connection/1294
You can do this by setting the variant on the drawer to persistent. It defaults to temporary.
<Drawer
    variant="persistent"
    open={open}
    onClose={onClose}
>
See the drawer example here:
The problem is that you use a script attribute with an inline event handler 'onload="test()3"' (presumably meant as 'onload="test3()"'). Script attributes are not nonceable elements. You should attach this with an event listener instead, or add its hash and 'unsafe-hashes' to script-src.
For Inspiration:
This is a simple proof of concept to find the minimal number of sets to cover these 3-number-sequences using Cartesian product blocks.
It checks all possible subsets of the input to identify those that form complete Cartesian blocks, then recursively searches for minimal sets of such blocks that cover the entire input.
The approach is brute-force and not optimized:
from itertools import combinations, product

input_combinations = {
    (1, 3, 5),
    (1, 4, 5),
    (1, 8, 5),
    (2, 4, 5),
    (3, 4, 5),
    (2, 4, 7),
}

# Check if a set of tuples forms a Cartesian product
def is_cartesian_block(subset):
    zipped = list(zip(*subset))
    candidate = tuple(set(z) for z in zipped)
    generated = set(product(*candidate))
    return generated == set(subset), candidate

# Find all possible Cartesian blocks within a set of combinations
def find_all_blocks(combos):
    all_blocks = []
    combos = list(combos)
    for r in range(1, len(combos) + 1):
        for subset in combinations(combos, r):
            ok, block = is_cartesian_block(subset)
            if ok:
                all_blocks.append((block, set(subset)))
    return all_blocks

# Recursive search for all minimal covers
def search(combos, blocks, current=None, results=None):
    current = [] if current is None else current
    results = [] if results is None else results
    if not combos:
        normalized = frozenset(frozenset(map(frozenset, block)) for block in current)
        if not results or len(current) < len(results[0][0]):
            results.clear()
            results.append((current, normalized))
        elif len(current) == len(results[0][0]):
            if normalized not in {n for _, n in results}:
                results.append((current, normalized))
        return
    for block, covered in blocks:
        if covered <= combos:
            search(combos - covered, blocks, current + [block], results)

def find_all_minimal_decompositions(input_combinations):
    all_blocks = find_all_blocks(input_combinations)
    results = []
    search(set(input_combinations), all_blocks, [], results)
    return [sol for sol, _ in results]

solutions = find_all_minimal_decompositions(input_combinations)
for i, sol in enumerate(solutions, 1):
    print(f"Solution {i}:")
    for row in sol:
        print(f"  {row}")
Output:
Solution 1:
({2}, {4}, {7})
({2, 3}, {4}, {5})
({1}, {8, 3, 4}, {5})
Solution 2:
({2}, {4}, {7})
({1}, {8, 3}, {5})
({1, 2, 3}, {4}, {5})
Solution 3:
({3}, {4}, {5})
({2}, {4}, {5, 7})
({1}, {8, 3, 4}, {5})
Solution 4:
({2}, {4}, {5, 7})
({1}, {8, 3}, {5})
({1, 3}, {4}, {5})
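As a quick sanity check of the block test, here is the `is_cartesian_block` helper restated standalone (same logic as above) with two tiny hand-made inputs:

```python
from itertools import product

def is_cartesian_block(subset):
    zipped = list(zip(*subset))
    candidate = tuple(set(z) for z in zipped)
    return set(product(*candidate)) == set(subset), candidate

# {1, 2} x {4} x {5} is fully present, so this is a complete block
ok, block = is_cartesian_block({(1, 4, 5), (2, 4, 5)})
print(ok, block)  # True, with axes ({1, 2}, {4}, {5})

# This pair is not a block: e.g. (1, 4, 5) is in the product but not the input
ok, _ = is_cartesian_block({(1, 3, 5), (2, 4, 5)})
print(ok)  # False
```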
Install your Pro kit with your package manager.
Create a Nuxt plugin at plugins/fontawesome.client.ts (or .js).
Add the Font Awesome CSS to nuxt.config.ts.
Plugin
import { defineNuxtPlugin } from 'nuxt/app'
import { library, config } from '@fortawesome/fontawesome-svg-core'
import { FontAwesomeIcon } from '@fortawesome/vue-fontawesome'
import { fas, far, fal, fak } from '@awesome.me/your-kit/icons'
// This is important, we are going to let Nuxt worry about the CSS
config.autoAddCss = false
// You can add your icons directly in this plugin. See other examples for how you
// can add other styles or just individual icons.
library.add(fas,far,fal,fak)
export default defineNuxtPlugin(nuxtApp => {
    nuxtApp.vueApp.component('icon', FontAwesomeIcon)
})
**nuxt.config.ts**
export default defineNuxtConfig({
    compatibilityDate: '2024-11-01',
    devtools: {
        enabled: true,
        timeline: {
            enabled: true
        }
    },
    css: [
        '/assets/css/main.css',
        '@fortawesome/fontawesome-svg-core/styles.css'
    ]
})
**How to use**
<icon :icon="['fas', 'house']" />
<icon :icon="['far', 'house']" />
<icon :icon="['fal', 'house']" />
<icon :icon="['fak', 'custom-icon-name']" />
Your use case is very interesting and fairly complex in terms of storage efficiency and the need for two-way access (label → value and value → label). You are already on a very good track in considering RocksDB and transforming the label representation. I will answer your questions one by one in detail:
Yes, RocksDB explicitly supports key prefix compression through the block-based table format configuration. This mechanism is very useful when your keys have long, repeated prefixes, as in the case of hierarchical labels.
Prefix Compression: By default, RocksDB uses prefix compression within each data block (typically 4KB by default), storing only the delta from the previous key's prefix.
Key Delta Encoding: If you store keys in lexicographic order (recommended), RocksDB stores only the difference between the current key and the previous one, which is very efficient for path-like structures such as ferroelectric/optical/drew.
You can configure this via BlockBasedTableOptions:
options.table_factory.reset(NewBlockBasedTableFactory(
    BlockBasedTableOptions().SetDataBlockIndexType(kDataBlockBinaryAndHash)));
You can also add a prefix_extractor (for example, rocksdb::NewFixedPrefixTransform(prefix_length)) to help filter indexing if you want to speed up prefix-based lookups.
➡️ Recommendation: Store label keys as lexicographically ordered UTF-8 strings, and take advantage of RocksDB's built-in prefix compression.
RocksDB does not directly support relations between columns within one database the way an RDBMS does (there are no foreign keys or joins). But:
You can use Column Families to store related data:
For example, one column family stores the label → value mapping, and another column family stores the reverse value → label mapping.
With WriteBatch and transactions, you can keep both in sync.
However, RocksDB has no shared compression across column families – each column family has its own compression, so repeating long strings in two places can take more space.
➡️ Recommendation: If storage efficiency is a major concern, duplicating long labels across different column families is inefficient. Use a de-referencing technique like the one you describe in point 3.
Yes, significantly. Replacing string path segments with small numeric IDs can yield major space savings, mainly because:
Long strings are repeated many times in the path structure.
Using an ID (for example, 32-bit) per segment turns a key into a small array of fixed-size numbers (for example, 3–5 × 4 bytes per label).
RocksDB is very efficient at storing short binary keys.
You can also concatenate the segment IDs into a single binary key blob to maximize compression and preserve lexicographic ordering.
For the implementation:
Create one column family for the segment → ID mapping.
Use a transactional batch to ensure that every insert into the "main store" uses consistent IDs.
Consider encodings such as varint or delta encoding if the number of segments is large but most IDs are small.
➡️ Efficiency estimate:
For example, the path turbofan/metaphase-insignia-clinch/scenography
is 45 bytes in UTF-8.
With 3 segments and a 32-bit ID each: only 12 bytes!
If RocksDB applies prefix compression over these IDs (since the binary representation is ordered), efficiency can improve dramatically.
Use RocksDB's prefix compression – it matches the characteristics of your labels very well.
For two-way access (label ↔ value), duplicating strings is inefficient. Prefer an indirected representation (ordinal/ID).
The segment-ID approach is very viable and can bring major storage and lookup efficiency gains – with overhead that can be managed via transactional batches.
Consider building an internal tool such as a dictionary segment store plus a path encoder/decoder as an abstraction layer on top of RocksDB.
Would you also like example code showing how to store and encode labels into segment IDs in RocksDB using transactions?
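The segment-ID scheme described above can be sketched in Python. This is an illustrative sketch only: the dicts stand in for RocksDB column families, and intern_segment/encode_label/decode_label are hypothetical helper names, not a RocksDB API:

```python
import struct

# These dicts stand in for the two RocksDB column families
# (segment -> ID and ID -> segment); names are illustrative.
segment_to_id: dict[str, int] = {}
id_to_segment: dict[int, str] = {}

def intern_segment(seg: str) -> int:
    """Return a stable small ID for a path segment, allocating one if new."""
    if seg not in segment_to_id:
        new_id = len(segment_to_id)
        segment_to_id[seg] = new_id
        id_to_segment[new_id] = seg
    return segment_to_id[seg]

def encode_label(label: str) -> bytes:
    """Encode 'a/b/c' as a compact sequence of big-endian 32-bit IDs."""
    return b"".join(struct.pack(">I", intern_segment(s)) for s in label.split("/"))

def decode_label(key: bytes) -> str:
    """Reverse lookup: turn the packed ID sequence back into the path."""
    ids = struct.unpack(">" + "I" * (len(key) // 4), key)
    return "/".join(id_to_segment[i] for i in ids)

key = encode_label("turbofan/metaphase-insignia-clinch/scenography")
print(len(key))           # 12 bytes for 3 segments
print(decode_label(key))  # round-trips to the original path
```

Big-endian packing keeps the binary keys lexicographically ordered by ID sequence, which is what lets RocksDB's prefix compression work on them; in a real deployment the two mappings would be written in one transactional batch to stay consistent.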
This is likely happening because the zendframework/zend-mail package is no longer installed or has been replaced by Laminas in newer Magento versions.
Magento 2 moved from Zend to Laminas
Replace Zend\Mail\Message with Laminas\Mail\Message in your code
I have implemented all the steps for the universal link to work. The deep link opens the app from other apps on the device, but when we try to open the app from a browser it shows a "Cannot GET /" error. Apple has added our AASA to its CDN.
Hi,
QoS AtMostOnce means that each subscribed device will get the message at most once. Since multiple devices are subscribed, multiple devices will receive the message.
If you want to achieve functionality similar to SQS, you would need to use AtLeastOnce QoS and handle the delivery logic on your own. E.g., give each connected device its own downstream MQTT topic, and then use a Lambda function to iteratively try publishing the packet on the next available topic/thing until it succeeds (i.e., until it receives a PUBACK). In that case, the clients would have to use CleanSession = True when they reconnect.
However, I'm not sure this is the most ideal use of AWS IoT Core.